© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Thu, 19 Dec, 12:04 AM UTC
7 Sources
[1]
The New York Times made headlines by suing OpenAI this year, but the case against AI isn't so black and white
AI companies found themselves in legal hot water throughout 2024, and the courtroom battle between the New York Times and OpenAI may have a long-lasting impact on how AI is developed in the future. On April 30, 2024, eight of the nation's largest daily newspapers, including the New York Times, filed a lawsuit against OpenAI and Microsoft, claiming both companies used news articles without permission to train their AI models. It's among the most significant cases launched against AI developers to date and follows a separate, ongoing lawsuit filed on December 27, 2023, in which the New York Times claimed OpenAI committed copyright infringement through ChatGPT.

When I asked ChatGPT about these cases, it dutifully gave me a summary of the 2023 case, including three main points: copyright infringement, a lack of compensation for using copyrighted content, and the ongoing nature of the case. Ironically, in explaining this case to me, ChatGPT may have pulled from New York Times articles, highlighting the exact issue at hand -- or, at the very least, the issue that the newspaper has.

But is this case as black and white as The Times' own masthead? Given that its outcome could determine how companies handle the training of future models for potentially decades to come, it's worth taking a closer look.

At the time of writing, OpenAI is just one of several AI companies facing dozens of lawsuits. These cases often focus on some form of copyright infringement, with plaintiffs ranging from individual YouTube creators to media conglomerates and music studios. When hearing about claims of copyright infringement by AI, musicians, actors, and writers may first come to mind. Hollywood has been at the center of tensions with the AI industry for years now, but the press and news outlets also play an important role in the battle against the misuse of AI. Cases like the 2023 and 2024 New York Times lawsuits aren't just about protecting journalists from losing their jobs to AI.
Unlike AI-related cases in Hollywood, these cases focus more on the exploitation of news articles by AI chatbots and the risks of those chatbots spreading misinformation under the guise of legitimate news sources. The possibility of generative AI fabricating news isn't hypothetical, either. It's already happening.

For example, in 2023, The Guardian blocked ChatGPT from scraping its website for training data after readers reported that the chatbot was generating news articles under Guardian reporters' bylines... that those reporters never actually wrote. Similarly, the 2024 New York Times lawsuit against AI cited multiple examples of ChatGPT generating false product recommendations attributed to news outlets. This completely fabricated information isn't just an inconvenient AI blunder. It may cause serious harm to readers who believe they're getting information from a trustworthy source. As the lawsuits claim, this can also damage newspapers' reputations if unsuspecting readers don't realize they're getting a recommendation or story that's completely made up.

To make matters even more complicated, these lawsuits also raise the issue of lost revenue. It's no secret that newspapers rely on paid subscriptions to stay in business. AI models can give readers access to text from copyrighted news articles without paying for access to that content. So, the New York Times and the other news outlets joining it in the 2024 lawsuit claim they are owed compensation for OpenAI's use of their content.

What all of this amounts to is a question of accountability. Can AI companies profit off of content they weren't authorized to use? Are they responsible for the damage done when their algorithms generate false or misleading content? The Times' argument is sound enough, but as with any lawsuit, there are two sides to every story.
In response to the lawsuit, OpenAI accused The Times of incomplete reporting in a post titled "OpenAI and Journalism," published on its website in January 2024, stating, "We support journalism, partner with news organizations, and believe the New York Times lawsuit is without merit." On the topic of copyrighted works appearing within ChatGPT, OpenAI claimed, "[The New York Times] had mentioned seeing some regurgitation of their content but repeatedly refused to share any examples, despite our commitment to investigate and fix any issues."

"Regurgitation" is OpenAI's name for a bug in which its models offer up training data as part of an answer. Exploring these regurgitations further, OpenAI states, "Interestingly, the regurgitations The New York Times induced appear to be from years-old articles that have proliferated on multiple third-party websites. It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate." From OpenAI's perspective, The Times' claims result from attempted "jailbreaking" efforts to force ChatGPT to reveal training data. As the company puts it, "this misuse is not typical or allowed user activity ... we are continually making our systems more resistant to adversarial attacks to regurgitate training data, and have already made much progress in our recent models."

Reinforcing his company's stance, OpenAI CEO Sam Altman commented on the lawsuit at the DealBook Summit in December 2024, stating, "I think the New York Times is on the wrong side of history in many ways." Copyright infringement isn't the only issue on the table in these landmark AI lawsuits, but in the copyright cases in particular, the outcomes could completely reshape the way AI models are trained in the future.
To some, LLMs like ChatGPT are simply spewing out pieces of pre-learned information to users, scraped from the training materials they were initially fed, raising alarm bells over how much of this "new" information contains another's work. However, AI developers claim that models such as ChatGPT don't work that way. To them, even if a model is trained on copyright-protected data, it's no different than somebody learning to play guitar by playing riffs from their favorite bands. If this person were to go on to create music of their own, they wouldn't be breaking copyright simply because they learned by listening to others who already could.

The jury is still out on which side of the argument will prevail in court, with both sides making strong cases. However, perhaps ChatGPT's excellence in delivering content that is so human-like will be its downfall, bringing to mind the famous Pablo Picasso quote (which, ironically, may itself be misattributed): "Good artists copy, great artists steal."

If you're anything from an AI enthusiast to the average AI tinkerer (or simply seeking out some of the additional features offered through Windows Copilot+ PCs or Apple Intelligence on Macs), then you'll need a powerful, high-performing laptop to keep up to speed with your needs. At Laptop Mag, we review laptops year-round to ensure we're giving you expert-backed and up-to-date recommendations on which notebook is right for you. When it comes to the best AI PC category, our top picks are the excellent Asus Zenbook S 14 (UX5406) for Windows users and the impressive Apple MacBook Air M3 for those running macOS.
[2]
In 1999, Nvidia invented the GPU. In 2024, it is powering the AI revolution
Twenty-five years ago, Nvidia changed the world of computing forever, sparking a ripple effect that is reshaping technology all around us today. On October 11, 1999, Nvidia released its first graphics processing unit (GPU), the GeForce 256. It revolutionized PC gaming, which was still in its infancy at the time. While Nvidia has had a massive impact on PC gaming in the decades since then, the GeForce 256 also set in motion Nvidia's path toward tech supremacy, not just in gaming, but in artificial intelligence.

As of August 2024, Nvidia controls a staggering 80% of the AI chip market, and CEO Jensen Huang isn't slowing down any time soon. Huang made headlines in May when Nvidia announced during a quarterly earnings call that it would release a new AI chip every year, doubling its release speed from its previous two-year schedule. This move is not only ambitious but also a necessary response to the rapid growth of AI over the past year. 2024 saw an explosion in new AI apps and products, from AI dog collars to the jaw-dropping Sora video generator. For AI to advance and evolve even further in 2025, AI developers need ever-more powerful chips to train and run their algorithms. That's where Nvidia comes in. Behind every AI model you ask to write text for you or generate a meme, there's probably an Nvidia chip hard at work. At least, for now. Can Nvidia's ambitious yearly release schedule keep it ahead in the competitive AI arms race?

Nvidia was founded in Fremont, California in 1993 by then-30-year-old Jensen Huang and two friends, Chris Malachowsky and Curtis Priem. Huang remains at the helm of Nvidia over 30 years later. Malachowsky is also still at Nvidia as a member of its executive staff and a senior technology executive. Priem retired from the company in 2003. The trio was inspired to found their own chip company after witnessing the amazing progress in 3D graphics that was starting to take shape in the early 90s.
It only took six years for Nvidia to launch the world's first GPU, the GeForce 256, which revolutionized PC gaming. To this day, Nvidia remains one of the two main GPU brands dominating the PC gaming market. AMD is its main rival in the space, but Nvidia's chips are widely considered the gold standard among gamers. That success has helped Nvidia earn a place among the most successful companies in the world, valued at over $3 trillion.

Nvidia isn't settling for success in just gaming, though. GPUs are capable of more than rendering stunning graphics. They are also the perfect solution for the intense processing power needed to develop and run large language models (LLMs), like OpenAI's ChatGPT. You have probably used an AI model powered by one of Nvidia's chips without even realizing it. OpenAI and Meta use Nvidia's H100 GPU to train their AI models. The H100 chip has become one of the most sought-after GPUs in the world amidst the AI boom. In fact, demand for Nvidia's AI-capable GPUs has grown so much that its upcoming Blackwell chips are expected to cost upwards of $30,000. The skyrocketing clamor for GPUs capable of running AI leaves many analysts wondering if we could soon face another GPU shortage like the infamous 2020 drought, driven by a cryptocurrency boom.

While other companies, like AMD and Intel, are making GPUs, there's no denying Nvidia has a firm hold on the market, for both gamers and AI developers. The question is, how much longer can Nvidia remain at the top? Nvidia's initial rise to success was driven by innovation in a budding market. In the early 90s, that was PC gaming and 3D graphics. For Nvidia to not only stay on top but to grow throughout the 2020s and beyond, it will have to innovate again, this time in the AI market. Nvidia is already well on the way to accomplishing that, as its H100 chips prove. However, Jensen Huang knows countless competitors are hungering to take a slice of Nvidia's market share.
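Why do chips built for rendering graphics translate so well to AI? It comes down to parallel arithmetic: the core workload of training and running an LLM is matrix multiplication, in which every output cell is an independent dot product. A minimal, illustrative sketch (real GPUs like the H100 run millions of these independent computations in parallel on dedicated hardware):

```python
def matmul(a, b):
    """Naive matrix multiply: result[i][j] depends only on row i of `a`
    and column j of `b`, so every cell can be computed independently --
    exactly the structure GPUs are built to parallelize."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# A tiny example: multiplying a 2x3 matrix by a 3x2 matrix.
a = [[1, 2, 3],
     [4, 5, 6]]
b = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(a, b))  # [[58, 64], [139, 154]]
```

A CPU works through those dot products a few at a time; a GPU fans them out across thousands of cores, which is why AI developers line up for Nvidia's hardware.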
At the 2023 New York Times DealBook Summit, Huang admitted a persistent fear of Nvidia going out of business, commenting in an interview, "I don't wake up proud and confident. I wake up worried and concerned." Huang explained, "I don't think people are trying to put me out of business -- I probably know they're trying to, so that's different. I live in this condition where we're partly desperate, partly aspirational."

This awareness of Nvidia's highly coveted, yet precarious spot at the top of the chip market explains Huang's decision to move to a yearly release cadence. The decision isn't just about self-preservation and market leadership, though. For Huang, it may also be about pursuing a vision for AI and the possibilities we have yet to achieve. In the 2023 DealBook Summit interview, Huang said, "There's a whole bunch of things that we can't do [with AI] yet. We can't reason yet, this multi-step reasoning that humans are very good at." Huang theorized that we could see early examples of Artificial General Intelligence (AGI) within the next five years.

Interestingly, he also stressed that AI is part of the innovation powering its own evolution. The H100 chips were designed with assistance from AI, and Huang has been adamant that AI will continue to play a major role in innovation at Nvidia. Perhaps that alone is reason to be confident in Nvidia's continued success. Nvidia is arguably one of the best examples of the power of human and AI collaboration, one that could soon lead to some of the most staggering advancements in the history of computing.
[3]
Google's AI Overview broke the Internet this year -- is AI ready to replace search?
Generative AI might act like a search algorithm, but don't fall for it.

It was a sunny afternoon in Mountain View, California on May 14, 2024, when Google changed the course of the Internet forever. Google's Liz Reid, Vice President and Head of Google Search, took the stage a little over 40 minutes into the keynote presentation at Google I/O to unveil one of the most controversial search updates Google has ever launched. "With each of these platform shifts, we haven't just adapted. We've expanded what's possible with Google search. And now, with generative AI, search will do more for you than you ever imagined."

Reid was referring to AI Overview, formerly "Search Generative Experience" or SGE, which launched in the U.S. that same day. At first glance, the feature seems fairly tame: a specialized version of Google's Gemini AI model generates summaries of search queries based on the top results for that query. Unfortunately for Google (and the rest of us), AI Overview had a rocky start, to say the least. It was immediately caught generating odd, inaccurate, and unsafe summaries and pulling from strange sources like ancient Reddit threads. The slew of fumbles shared across social media quickly turned AI Overview into a meme, but generative AI's role in the future of search is anything but a joke.

Reid kicked off her demo of AI Overviews by trying to reassure the audience: "Whatever's on your mind and whatever you need to get done, just ask and Google will do the Googling for you." The idea of Google taking care of the trouble of Googling might sound nice, but it turns out Gemini isn't very good at taking over Search, or at least it wasn't at first. In the week after the feature's launch, users spotted a flood of strange AI-generated results and wasted no time sharing them all over the Internet. The most infamous example suggested adding glue to pizza sauce. Another recommended cleaning your washing machine with chlorine gas.
Other results ranged from eating rocks to cooking spaghetti with gasoline. Long story short, AI Overview went viral -- for all the wrong reasons. On May 30, 2024, just a couple of weeks after Google I/O, Liz Reid shared an update on AI Overview explaining that Google was working to address the problematic results and prevent more from popping up. Reid clarified that some of the strangest results were emerging from "sarcastic or troll-y content from discussion forums," such as the glue pizza result. She also pointed out that some of the screenshots of nonsensical AI Overview results were connected to queries no one would reasonably ask to begin with, such as "How many rocks shall I eat."

After that initial turbulent start, Google manually removed some AI Overview results and added more guardrails to prevent other problematic summaries from appearing. Even so, many users were still bothered by the new feature, especially after they realized there was no way to turn it off. At the time of writing, the only way to remove AI Overview from search results is through workarounds like browser extensions. For better or worse, Google seems to be charging full-steam ahead on integrating generative AI into Search. The question, or "query" as Google might say, is whether or not generative AI is truly capable of replacing Search as we know it.

Many of us (myself included) got a good laugh out of Google's unhinged AI-generated results. Memes are all well and good, but as the saying goes, it's all fun and games until someone gets hurt. Most people would have the common sense not to try many of the nonsense recommendations mentioned above, but there are still plenty of cases where AI Overview might generate something false, misleading, or dangerous that isn't immediately obvious as an error. There are many potential pitfalls to replacing traditional search engines with generative AI. Perhaps the most worrying one is the fact that we haven't fixed AI's lack of accuracy yet.
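AI Overview's basic recipe -- generate an answer from the top-ranked results for a query -- helps explain these failures: the summary can only be as trustworthy as whatever happens to rank highly, troll posts included. Here's a toy sketch of the idea (my own simplification, not Google's pipeline; a crude term-overlap score stands in for both the ranking and the Gemini summarization step):

```python
def overview(query, results, top_n=2):
    """Pick the `top_n` result snippets that best match the query terms
    and join them into a summary-like blurb. No judgment of whether a
    snippet is serious or a joke -- only how well its words match."""
    terms = set(query.lower().split())
    ranked = sorted(
        results,
        key=lambda snippet: len(terms & set(snippet.lower().split())),
        reverse=True,
    )
    return " ".join(ranked[:top_n])

results = [
    "Cheese sticks better to pizza if the sauce is thickened properly.",
    "A joke forum post: add glue to your pizza sauce for extra tackiness.",
    "Pizza dough should rest before baking.",
]
print(overview("why does cheese not stick to pizza", results))
```

In this toy, the joke forum post matches the query almost as well as the legitimate advice and ends up in the "summary" -- essentially the glue-pizza failure mode in miniature.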
Countless research studies have shown generative AI is riddled with bias and inaccuracies, though these flaws are often not immediately obvious. When generative AI is advertised as a replacement for traditional search, many people may make the mistake of never double-checking the "information" they get from AI models like Gemini or ChatGPT. AI is so good at replicating realistic language that it can make completely fabricated information seem legitimate. That's how we end up with situations like lawyers citing fictional legal cases generated by ChatGPT or AI-generated scientific research papers appearing on Google Scholar.

Is there potential for generative AI to replace web searching as we know it? Possibly, with much more development and far better accuracy guardrails. However, Google is not waiting to launch this technology. It's already here, for better or worse. So, before searching with ChatGPT or blindly trusting AI Overviews, do your own Googling first.
[4]
Microsoft created the single biggest AI controversy this year -- and it might make AI more thoughtful
Microsoft learned the hard way that one thing matters more to users than convenience.

On May 20, 2024, I watched live as Microsoft's Pavan Davuluri announced one of the most controversial features in the company's history, possibly even tech history at large. Davuluri, Microsoft's Corporate Vice President of Windows + Devices, kicked things off by laying out Microsoft's vision for the Copilot+ experience: "Windows has always believed in making technology accessible to everyone. Today, we're carrying that belief into the new era of AI with a reimagined core architecture for the PC, weaving AI into every layer, from the chip to Windows to the cloud." When Davuluri said Microsoft was "weaving AI into every layer" of your laptop, he wasn't kidding. The feature he announced just seconds later proved it: Recall.

Microsoft Recall uses AI to track and interpret everything you do on your Windows 11 computer. At regular intervals, it takes a screenshot of whatever is on your display and saves it in case you ever want to recall what you were doing, searching for, or working on at a later time. For example, you could ask Copilot to remember a pair of shoes you were shopping for a month ago or find an essay draft you lost in your files last week.

That might sound useful, but I'll admit that as I watched Davuluri demonstrate Recall that day, I was more concerned than impressed. Microsoft quickly discovered that I wasn't the only one concerned, perhaps even scared. Having an AI watch everything you do all the time, in case you need help finding it later, requires a lot of trust in a big tech company. As it turns out, most people don't trust Microsoft, leading to an explosion in controversy over Recall. Over the months following that original announcement, Microsoft learned the hard way that privacy matters far more than convenience in this early era of AI for all.
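The capture-and-search loop Davuluri described can be sketched in a few lines of Python. This is a conceptual toy, not Microsoft's implementation; the `screen_text` argument stands in for the screenshot-plus-OCR step Recall actually performs:

```python
import datetime

class SnapshotStore:
    """Toy model of Recall's core idea: periodic snapshots of on-screen
    text, timestamped and searchable later."""

    def __init__(self):
        self.snapshots = []  # list of (timestamp, extracted_text) pairs

    def capture(self, screen_text, when=None):
        # In the real feature, this fires at regular intervals with a
        # screenshot of the display; here the caller supplies the text.
        when = when or datetime.datetime.now()
        self.snapshots.append((when, screen_text))

    def recall(self, keyword):
        """Return every stored snapshot whose text mentions `keyword`."""
        return [
            (when, text)
            for when, text in self.snapshots
            if keyword.lower() in text.lower()
        ]

store = SnapshotStore()
store.capture("Shopping cart: red running shoes, size 10")
store.capture("Essay draft: The History of Computing")
print(store.recall("shoes"))
```

Even this toy hints at the problem critics seized on: every snapshot sits in an ordinary, readable data structure, and nothing protects it unless the developer deliberately adds encryption and access controls.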
The controversy -- as you'll read below -- might make future AI products more thoughtful about how they use our data. Early in his Recall demo, Davuluri explained that users can control what Recall sees by turning it off or blocking it from screenshotting specific websites. He noted, "Even the AI running on your device can't access private content." Unfortunately, this assurance was not enough to quiet fears about the risks Recall posed for privacy and security.

When the feature was initially announced on May 20, the day before Microsoft Build 2024 kicked off, Microsoft had planned to launch Recall as a preview feature a month later on June 18, 2024. However, the backlash over privacy concerns was so intense that Microsoft paused that release... a long pause. People were quick to notice serious vulnerabilities in Recall. On May 31, just days after the feature's announcement, Kevin Beaumont, a cybersecurity expert and former Senior Threat Intelligence Analyst at Microsoft, published a post on Medium explaining the slew of security risks he found in Recall. The article's title says it all: "Stealing everything you've ever typed or viewed on your own Windows PC is now possible with two lines of code -- inside the Copilot+ Recall disaster."

Beaumont was one of many cybersecurity experts calling out what he saw as significant issues in Recall's design. Users were right to be concerned, too. As Beaumont revealed, the original version of Recall stored data as plain text files with little to no protection from unauthorized access. There was also the apparent risk of the AI recording sensitive information, such as private conversations, passwords, or financial data. Just a couple of weeks after Recall's announcement, on June 7, 2024, Microsoft finally responded to the outcry. It added clear yes and no buttons to opt in to Recall during the setup process for Windows 11 PCs and required Windows Hello to use Recall for an added layer of security.
Microsoft delayed the preview release of Recall twice, with updates and attempts to address the plethora of security issues throughout the year. Finally, on December 6, Recall launched as a preview feature only available to members of the Windows Insider Program. Unfortunately for Microsoft, even after months of reworking the feature, security issues with Recall were spotted within days. Tom's Hardware caught Recall capturing credit card numbers and social security numbers even with the "sensitive information" filter turned on. Apparently, when AI is woven into "every layer" of the Windows experience, it's hard to unweave it when you want your privacy back.

The rampant security concerns with Microsoft Recall hold a critical lesson for tech companies and AI developers: for most people, convenience is not worth sacrificing privacy. I've noticed a recurring theme this year: AI can do many cool things, but it does so at the cost of enormous amounts of data, often your data. The reality is that AI still creates friction across many audiences, from celebrities campaigning against deepfakes to everyday users concerned about privacy with new Windows features.

AI is finding its way into our tech, for better or worse. There are some incredible ways AI can be used for good, even to save lives. However, the need for massive amounts of data to power and create AI models also creates very real possibilities for this tech to be used for harm, even to ruin lives. We need to reckon with that for AI to be part of a future where technology is used to build a better world. The Microsoft Recall debacle should serve as a warning for other tech companies. Over the past few decades, tech has been used to make our lives more convenient, but Recall proved there's a limit to that. The reality -- right now, anyway -- is that most people don't want tech to do everything for them.
The ability to have an AI remember where you store your files is not worth letting that same AI track everything you do on your laptop. It's certainly not worth the risk of a hacker snatching your social security number from a screenshot with just a couple of lines of code. In the age of AI, there's at least one thing far more critical than convenience: privacy, and respecting users enough to let them control it.
[5]
It's Slim Pickings, But Here Are 8 AI Releases From 2024 We Actually Liked
Reporting on the AI industry in 2024 has been a wild ride. As the year comes to a close, we find ourselves asking, "Did anything good come out of it?" We published 725 articles on AI this year, with hundreds more that touch on it in some way. Nearly every new device, whether it's a phone, PC, or robot vacuum, now advertises flashy "AI-powered" gobbledygook. We, like many consumers, view most of these features as marketing fluff and party tricks, not the overhaul of modern life as touted by the tech industry. But there were some gems that caught our eye. Here are some of our favorite AI feature releases from 2024, and we're hoping for much more in 2025.

1. Samsung Live Translate

It's looking like AI may be able to take a sledgehammer to language barriers. Samsung's newest Galaxy phones and tablets, for example, let you press a button when you're on a call for a real-time translation. Samsung likens it to turning on closed captions when watching a show, because you'll also see a live text translation in addition to hearing the audio. "A Samsung representative spoke Korean while I spoke English during a call," writes PCMag mobile analyst Iyaz Akhtar. "I was able to hear the person talking in Korean, then an automated English voice relayed the translation. When I spoke, the caller heard me in automated Korean. There is a small delay in processing the audio, but this feature could be useful for traveling." This also has the potential to open up business communication with global clients, or even help you speak with neighbors and in-laws.

2. Google 'Add Me'

There is no shortage of AI-powered photo-editing options. But one of the most unique is the Add Me feature on the Google Pixel 9 Pro XL. It allows you to edit yourself into pictures in a much less clunky way than Photoshop by combining two pictures, one with you and one without. Select Add Me in the Camera app, and the phone will guide you through the instructions.
First, take a picture of the group while leaving space for yourself. Then, ask someone else to take another picture of you in the empty space. The phone will guide the photographer on how to properly align the second photo. Then, it combines the two photos and, voila! You're in the group. This solves the problem of wanting a group picture when there's no one around to capture it. Awkward group selfies could be a thing of the past.

3. ChatGPT Advanced Voice Mode, Gemini Live

This year marked the first time a few of us felt like we could have a truly humanlike conversation with a computer. The clunky, awkward voice tech that plagued Apple Siri and Amazon Echo in years past is being supplanted with fluid, rolling banter. Really. OpenAI and Google released the most compelling voice tech we saw this year, with ChatGPT Voice Mode and Gemini Live. Users can select the voice they like most, choosing from 10 on Gemini Live and four on Voice Mode (sorry, no ScarJo voice). Notably, the conversation doesn't derail if you pause or stumble over your words. You can ask the AI about anything you'd like, from European history trivia to availability at a nearby restaurant. It's still probably clunkier than just typing into the search bar, or asking a chatbot, but the fluidity of the conversations is an impressive improvement in this area. OpenAI enhanced Voice Mode recently by adding the ability to share your video or screen with the AI. This brings Voice Mode even closer to having a conversation with a person, in which you discuss something you can both see. It's only available to ChatGPT Plus and Pro subscribers today (and not yet in the EU). It's rolling out to Enterprise and Edu users next month.

4. Google NotebookLM

No form of online content is safe from AI-ification, and that includes podcasts. Google added a new capability to NotebookLM in August that turns your scattered notes into a full-blown podcast discussion between two AI hosts.
One male and one female voice chat about the material in an incredibly humanlike, though at times cringeworthy, tone. They make jokes, play off one another, and raise probing questions about the material. Spotify partnered with Google to add a NotebookLM-powered feature to its annual Spotify Wrapped. I tapped NotebookLM to turn my childhood diary into a discussion about being a kid in the modern world, which was surprisingly insightful considering the simplistic, childhood thoughts I uploaded. Google plans to continue building on the feature; the latest update allows the human listener to join the conversation with the AIs and ask questions.

5. Windows Studio Effects

This one is more practical than magical, and earned a spot on the list because it's something our editors are using regularly to improve the quality of dreaded video calls. Windows Studio Effects, introduced with Windows 11, uses AI to automatically improve your picture, auto-blur your background, and adjust the frame of view to optimize the shot -- all without slowing down your PC's performance. It also corrects for the times when someone may be chatting with you, but not looking into the camera. Maybe they're looking at another monitor, or their camera is installed above them and they're looking at the screen. Either way, there's a loss of human connection without eye contact. On devices like the Microsoft Surface Laptop Studio 2, the AI will artificially adjust your eyeballs toward the camera to correct this. "While you may have mixed feelings on AI 'faking' the fact that you are making eye contact, it does indeed work, maybe even unsettlingly well," says PCMag's Matt Buzzi. "In some professional environments, it may win you some plaudits or subconscious approval."

6. Claude Artifacts

Anthropic, the AI firm behind Claude AI, released a compelling new design that could make chatbots more effective and enjoyable to use.
Admittedly, this is more of an interface upgrade than a technological breakthrough, but it could help chatbot adoption, which is half the battle. The new feature, dubbed Artifacts, introduces a split-screen view to Claude. Rather than the conversation existing in one long, scrolling chat, it places questions on one side and the chatbot's output on the other. That means you can ask it to write an essay, generate a snippet of code, or create a picture, and then view what it comes up with in another panel. This is a cleaner, more productive way to look at the output while suggesting improvements to it. OpenAI followed up with its own version, dubbed ChatGPT Canvas, a month later. It promises a more iterative way to interact with and improve the chatbot's output as well, and is now free to all users as of this month. 7. DuckDuckGo AI Chat It's no secret that the large language models (LLMs) that power AI chatbots are constantly feeding on the data you provide them, and using that data to train their models. With their ability to chat about such a wide range of personal and professional topics, you could be forking over some seriously personal information, unless you use an option like DuckDuckGo AI Chat. Since it debuted over the summer, this privacy-focused LLM has become a go-to chatbot for our editors. It provides a free and "anonymous" way to access popular chatbots without exposing your data to AI training. It builds a wall between the user and AI models from OpenAI, Anthropic, and Meta, and is an on-brand offering for DuckDuckGo, which already has a popular, privacy-protecting web browser. 8. iPhone 16 Camera Control with Apple Intelligence Though Apple Intelligence has been a bit of a letdown so far, one feature that debuted this month with iOS 18.2 caught our eye. Paired with the Camera Control button on the iPhone 16, Visual Intelligence can help identify a statue or monument, summarize text on a piece of paper, and even translate text from an image.
Through a ChatGPT integration, also new with iOS 18.2, you can ask OpenAI's chatbot about the photo as well. To be fair, Google Lens already offers this type of camera-based AI. ChatGPT, Claude, and other chatbots can also interpret a photo you upload. We've also seen other iterations on this theme of searching the web through the camera, such as Google's Circle to Search. Apple adding this capability to the iPhone may popularize a new way to search the web. It offers a more real-world focus than a typical Google search, or even a typical chatbot conversation, and stands out as a net-new capability AI enables. What are your favorite AI features that launched this year? Let us know in the comments.
[6]
No one wants another chatbot. This is the AI we actually need
Let's be honest: we're drowning in AI chatbots -- and nobody really asked for more of them. Tools like ChatGPT, Google Gemini, and an endless stream of me-too AI assistants can draft emails, answer trivia, and summarize articles. They're clever and well-trained, but strip away the gloss, and what are they? Fancy search engines that are closer to the uncanny valley than approximating real human interaction. They respond but don't genuinely understand who we are, why we're stressed, or what we need on a deeper, more personal level. Also: I'm an AI tools expert, and these are the only two I pay for The ultimate promise of AI has always felt closer to science fiction: the intuitive support of KITT from Knight Rider, the loyal companionship of C-3PO from Star Wars, or the deep understanding of Commander Data from Star Trek. These characters don't just execute tasks -- they grasp context, emotion, and our evolving human complexities. Yet, for all our technological progress, today's AI tools remain light years away from that vision. I've been a paying subscriber to ChatGPT since it launched, and I've watched it improve. Sure, it can remember certain things across sessions, letting you maintain a more continuous conversation. However, these chatbot memories are limited by model boundaries; they can't fully integrate their knowledge into an evolving narrative of my life -- or map my emotional states or long-term ambitions. Think of them as diligent but low-EQ assistants -- better than starting from scratch each time, but still nowhere near "getting" me as a whole person. Make no mistake, none of these models -- ChatGPT, Apple Intelligence, Google's Gemini, Meta.ai, or Perplexity -- are anywhere close to the holy grail of General AI. They remain fundamentally task-specific information retrieval tools, and their incremental memory or summarization improvements are far from game-changers. Many of the intuitive, empathetic capabilities we yearn for remain out of reach. 
Also: I test wearable tech for a living. These are my favorite products of 2024 Fundamental advancements are still needed to transform today's chatbots into something more -- something that can sense when we're stressed or overwhelmed, not just when we need another PDF summarized. After over a year of wrangling with "advanced" assistants, I've realized we need more than coherent answers. We need AI woven directly into our routines, noticing patterns and nudging us toward healthier habits -- something that can rescue us from sending that hasty, frustration-fueled email before we regret it. Think about it: an AI that knows your calendar, documents, chats, health metrics, and maybe even your cognitive state could sense when you're fried after back-to-back Zoom calls or skip lunch because your inbox is exploding. Instead of passively waiting for you to type commands, the AI can proactively suggest a break, rearrange your schedule, or hit pause on that doom-scrolling session. In other words, we need AI to evolve from a fancy command line into an empathetic, intelligent partner. But how do we get there? To break the cycle of incrementalism, we need more than clever conversation. Non-invasive brain-computer interfaces (BCIs), such as Master & Dynamic's EEG-driven headphones powered by Neurable's technology, might be the key. Also: Apple Vision Pro can be controlled by thoughts now, thanks to BCI integration Neurable's tech measures brainwaves to gauge attention and focus. This is cool as a productivity hack, but it's even cooler when you imagine funneling that data into a broader AI ecosystem that adapts to your mental state in real time. I spoke with Dr. Ramses Alcaide, CEO of Neurable, who explained how their EEG technology delivers near-medical-grade brain data from compact sensors placed around the ears, achieving about 90% of the signal quality traditionally limited to bulky EEG caps. 
"The brain is the ultimate wearable," Alcaide told me, "and yet we're not tracking it." By translating subtle electrical signals into actionable insights, Neurable's approach helps align work, study, and downtime with our natural cognitive rhythms. Instead of forcing ourselves into rigid 9-to-5 blocks, we might schedule creative projects during a personal focus peak or plan a break when attention wanes -- optimizing our daily flow for sharper performance and less mental fatigue. However, EEG represents just one avenue in a rapidly evolving field. Future non-invasive methods, such as wearable magnetoencephalography (MEG) systems, could detect the brain's faint magnetic fields with even greater precision. While MEG historically required room-sized equipment and special shielding, emerging miniaturized versions may one day read brain activity as effortlessly as today's smartwatches track steps. Also: I tried the mind-reading headphones that got the internet buzzing. Here's my verdict This could let AI differentiate between a stress-induced slump and simple mental boredom, offering precisely targeted support. Imagine a language tutor that scales back complexity when it senses cognitive overload or a mental health app that flags early cognitive or mood changes, prompting preventive self-care before issues escalate. The potential goes well beyond gauging focus or presence. With richer, more granular data, AI could detect how well you internalize a new skill or concept and fine-tune lesson plans in real time to maintain engagement and comprehension. The AI could also consider how your sleep quality or diet influences cognitive performance and suggest a short meditation before a big presentation or advise you to reschedule a challenging meeting if you're running on empty. 
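The kind of attention-aware nudging described above boils down to watching a stream of focus scores and intervening when they sag. A minimal sketch, assuming a normalized 0-to-1 attention scale and an arbitrary threshold (neither reflects Neurable's actual metrics):

```python
def suggest_break(attention, window=5, threshold=0.4):
    """Flag a break when the rolling average of recent attention samples
    (0.0 = distracted, 1.0 = fully focused) dips below a threshold.
    The scale and threshold here are illustrative, not Neurable's metrics."""
    if len(attention) < window:
        return False  # not enough samples to judge yet
    recent = attention[-window:]
    return sum(recent) / window < threshold
```

In practice, a system like this would feed on continuous EEG-derived scores rather than a simple list, but the core decision, smoothing noisy samples before acting on them, is the same.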
In a high-stakes moment, like drafting an emotionally charged email, your AI might sense brewing frustration and gently suggest a brief pause -- functioning more like a caring GERTY-from-Moon than a domineering HAL -- nudging you toward wise choices without overriding your autonomy. Also: 7 ways to write better ChatGPT prompts - and get the results you want faster This adaptive, human-centered support is already taking shape in simpler forms. Some professionals reschedule challenging tasks to their mental prime, while students use basic tools to identify their best study times. Individuals with ADHD employ feedback on focus levels to better structure their environments. As sensors improve and the analytics powering them become more sophisticated, AI will evolve into an empathetic, context-aware partner. Instead of pushing us to grind harder, it will encourage smarter, more sustainable work patterns -- steering us away from burnout and toward genuine cognitive well-being. Brain data is just one piece of the puzzle. Another key element is building flexible AI ecosystems composed of multiple specialized "AI People." Natura Umana, operating in stealth since 2022, is taking a bold step in this direction with its upcoming Nature OS, which, while largely untested, presents a new vision for human and AI interaction. Instead of relying on a single, one-size-fits-all assistant, you'll interact with a team of LLM-based AI personas -- each with its own personality, skills, and purpose. They're designed to replicate human-like behavior and conversation, tapping into your personal data so they can act on your behalf, freeing you to focus on what truly matters. Also: The best open-source AI models: All your free-to-use options explained Most importantly, these AI People aren't static. As they engage with you, they develop memories, form opinions, and may reshape their core beliefs over time. Some personas adapt faster than others as they learn about your preferences and habits. 
The main persona, Nature, can handle web searches, document analysis, and access your Google Calendar and email to deliver contextually accurate insights. Meanwhile, a fitness coach might draw data from your Health app or wearable devices to offer personalized exercise suggestions. If Nature lacks the right expertise, it seamlessly hands you off to a more specialized AI persona, like a travel guide or therapist, ensuring you're always talking to the best "person" for the job. This multi-agent concept strives to move beyond basic Q&A interactions. Ideally, these AI People would determine which details to store long-term -- like a friend's favorite hobbies -- and which to keep temporarily, continuously refining their understanding of you. Over time (and this is an aspiration rather than current reality), they could evolve from generic advisors into genuine confidants who understand your habits, goals, and challenges on a nuanced level. Also: I'm a ChatGPT power user - here's why Canvas is its best productivity feature Natura Umana's approach also leverages Google's ecosystem for much of its data and integrations. By drawing on Google's services, these AI People gain broader, richer contexts, which raises interesting questions about the startup's future. Given Natura Umana's small size and pioneering approach, success could put it on the radar of big tech. Should its technology prove effective at seamlessly integrating multi-agent AI with personal data, it's plausible that Google, already invested in the AI space, might consider acquiring the company or emulating its techniques. This wouldn't be unprecedented -- tech giants have a long history of snapping up innovative startups to bolster their own platforms. For now, Natura Umana, known for collaborating with Switzerland-based mobile accessories vendor RollingSquare, aims to minimize screen time and seamlessly integrate its AI into daily life with specially designed earbuds, the HumanPods. 
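The handoff pattern described here, a main persona delegating to specialists by expertise while each persona accumulates its own memory, can be sketched in a few lines. The persona names and skill tags below are illustrative assumptions, not Natura Umana's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One LLM-backed 'AI Person' with a skill set and a running memory."""
    name: str
    skills: set
    memory: list = field(default_factory=list)

    def handle(self, request: str) -> str:
        self.memory.append(request)  # personas accumulate context over time
        return f"{self.name} handling: {request}"

class PersonaRouter:
    """Hands each request to the specialist whose skills cover its topic,
    falling back to the main persona otherwise."""

    def __init__(self, default: Persona, specialists: list):
        self.default = default
        self.specialists = specialists

    def route(self, request: str, topic: str) -> str:
        for persona in self.specialists:
            if topic in persona.skills:
                return persona.handle(request)  # seamless handoff
        return self.default.handle(request)

# Illustrative personas -- names and skill tags are assumptions, not
# Natura Umana's actual configuration.
nature = Persona("Nature", {"search", "calendar", "email"})
coach = Persona("Coach", {"fitness", "health"})
router = PersonaRouter(nature, [coach])
```

The interesting design question is the one the article raises: which of those per-persona memories get kept long-term and which are discarded.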
"You wear the earbuds in the morning and forget about them," co-founder Carlo Ferraris told me. The ultra-comfortable, open-ear earbuds designed for NatureOS are so discreet that some testers literally forgot they were wearing them. A double-tap summons your AI People -- no screens needed. Also: The best AI search engines: Google, Perplexity, and more The wellness coach might sense your low energy and suggest a brief walk. The therapist persona might detect signs of stress and prompt a calming break. The research assistant ensures you have the necessary documents and talking points with key insights before a big meeting. "It's like Her, but without the existential drama," Ferraris quipped. Though initially a limited web demo, NatureOS will soon debut as a mobile app paired with new earbuds, evolving as you use it. While these capabilities remain partly aspirational, the approach hints at a future where personal AI ecosystems grow smarter, more empathetic, and more deeply integrated with the services we rely on every day. And if that model proves successful, don't be surprised if a giant like Google takes a very close look -- either to acquire or replicate -- to stay ahead in the AI race. While BCIs and AI People hint at a future of empathetic, context-driven assistants, Apple's own AI efforts remain comparatively modest. In a previous piece, I examined what Apple must add to Apple Intelligence to break free from basic text rewrites, limited ecosystem knowledge, and the privacy-first but siloed approach. My recommendations ranged from domain-specific retrieval-augmented generation (RAG) APIs and advanced writing tools to enhanced voice-based workflow automation, robust privacy controls, and integrated health insights leveraging Apple's hardware. Also: 10 features Apple Intelligence needs to actually compete with OpenAI and Google BCI-driven insights could help Apple Intelligence evolve from a cautious, on-device engine into a proactive, context-savvy partner. 
Subtle cognitive signals -- gleaned from Apple Watch data or even future EEG/MEG inputs -- could enable AI to anticipate mental overload, suggest schedule tweaks, or tailor content complexity on the fly. By applying RAG techniques, Apple could pull domain-specific information into apps like Mail, Notes, or Pages, making the platform indispensable for professionals and researchers. Similarly, Apple might adopt a multi-agent model, inspired by Natura Umana, creating specialized AI personas for scheduling, research, wellness, or media production -- each with its own evolving "personality" and expertise. Also: 6 ways the new AirPods Max could have been so much better This shift would align Apple's privacy ethos and on-device computation with richer context and more dynamic user experiences. Instead of remaining a stepping stone to more advanced tools, Apple Intelligence could become a fully realized ecosystem that responds and understands, empowering users with empathetic guidance while respecting their data and autonomy. Moving from today's "fancy command lines" to fully integrated AI "staff" that access our emails, calendars, health data, and even brain activity demands a significant leap of faith. Many of us will want more than promises -- we'll look for proven health insights, validated use cases, and rigorous privacy safeguards before entrusting sensitive information to these systems. The specter of misaligned AI or malicious manipulation is real. What if, during an emotional low point, an AI suggests destructive coping strategies instead of helpful ones? These concerns make transparency, human oversight, and user control non-negotiable. Also: The best AI for coding in 2024 (and what not to use) At the same time, the potential of combining brainwave insights (via EEG or future MEG sensors) with multiple specialized AI personas is compelling. 
Imagine a wellness coach who senses your mental fatigue and recommends a break, a therapist who nudges you toward mindfulness when stress spikes, and a research assistant who organizes documents for your next big project -- all working together in harmony. Rather than a disconnected array of chatbots, you'd have a cohesive, empathetic AI ecosystem aware of your context, adapting as you evolve. Many users will start small -- perhaps experimenting first with wearables that offer general health metrics -- before scaling up to a full AI team. As technology advances, trust-building measures like on-device data processing and encrypted integration will be essential, as seen with Neurable and Natura Umana. Without user ownership of data and safety assurances, no level of "understanding" that generative AI might achieve justifies the risks. But if executed responsibly, these innovations may usher in AI that answers our questions and genuinely cares about our well-being, paving the way for a future where science fiction becomes reality. We're still far from the holy grail of General AI, and no one's promising a full-fledged Commander Data tomorrow. Yet, the experiments underway -- from leveraging EEG data for cognitive insights to orchestrating multi-agent AI personas -- show that researchers and developers are pushing beyond simple chatbots toward more personal, adaptive, and supportive systems. As we experiment with brain-computer interfaces, refine language models, and integrate advanced sensors into everyday devices, we're edging closer to AI that doesn't just respond but genuinely understands us. Achieving this will require careful engineering, robust privacy measures, and a willingness to embrace new paradigms -- like retrieval-augmented generation (RAG) for richer knowledge integration and multi-agent architectures for specialized skills.
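At its core, the RAG paradigm mentioned here is a simple loop: retrieve the passages most relevant to a query, then condition the model's answer on them. A minimal sketch, with a naive keyword-overlap scorer and a prompt format that are illustrative assumptions (production systems use embedding-based retrieval):

```python
def retrieve(query: str, docs, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs) -> str:
    """Prepend retrieved context so the model grounds its answer in it."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy corpus for illustration only.
docs = [
    "the battery drains fast in cold weather",
    "paris is the capital of france",
]
```

The payoff is that the language model answers from the retrieved context rather than from whatever it memorized in training, which is what makes domain-specific assistants feasible.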
With these technical strides and ethical safeguards, tomorrow's AI could evolve from a clever question-answering tool into a trusted ally that respects our boundaries, anticipates our needs, and genuinely enhances our daily lives.
[7]
A new era for Windows: Can Microsoft's longtime engine power another tech revolution?
Editor's Note: Microsoft @ 50 is a year-long GeekWire project exploring the tech giant's past, present, and future, recognizing its 50th anniversary in 2025. Learn more and register here for our special Microsoft @ 50 event, March 20, 2025, in Seattle. A tech icon has reached another turning point. After fueling the rise of Microsoft, enabling the dream of a computer on every desk and in every home, making the leap online, becoming the target of rivals and governments around the world, suffering security breaches, missing out on mobile, and expanding to the cloud, the fate of Windows depends on the tech giant's ability to reinvent its flagship product one more time. Microsoft is betting Windows on AI, looking to breathe new life into one of the most successful products in tech history. Just as it introduced the masses to the PC and the web, Microsoft now sees in Windows the potential to bring the full promise of AI to the world. A lot has changed since Windows debuted in 1985 as a "graphical operating environment which runs on the Microsoft MS-DOS operating system." For one, due in large part to the success of Windows during the past four decades, the company today has vast financial resources and strategic advantages, including its own massive cloud and AI infrastructure. But in other ways, the odds are against Microsoft as it tries to insert Windows into another revolution. The center of gravity in software development has shifted to smartphones, the cloud, and the web. After struggling for decades to get into new areas, from phones to mixed-reality headsets, the core of the Windows business remains desktop and notebook computers. And with the likes of Android, iOS, Chrome, AWS, and Meta serving as giant platforms in their own right, it's not clear where the breakthrough AI apps will ultimately emerge. 
Microsoft has been laying the groundwork for the new era of Windows by working with silicon manufacturers and PC makers to augment the CPU and GPU with a powerful new chip -- a neural processing unit, or NPU -- to run advanced AI programs directly on the machine. But the first steps into the AI era have been shaky. Security and privacy questions delayed Microsoft's efforts to give Windows a photographic memory with the new "Recall" feature, requiring Microsoft and other PC makers to initially launch their new Copilot+ PCs without it. Yet the sheer footprint of Windows continues to set it apart. By one measure, Microsoft's global share of the desktop PC market stands around 70% -- down from 90% a decade ago but still maintaining a wide lead over MacOS and Linux despite the gradual decline. This lasting presence is the outcome of a 50-year-old decision. Microsoft co-founders Bill Gates and Paul Allen chose to produce software for a range of PCs, rather than making hardware of their own. That was the defining strategy of those early years, and it's still playing out today. "Windows is at planet scale. We have over a billion people using the product," said Pavan Davaluri, Microsoft's vice president for Windows & Devices, in a recent interview. The "superpower" of Windows, Davaluri said, is still the ecosystem: the software developers who build on the platform, the hardware partners who enable it, and the diversity of devices, applications, and experiences that result. Davaluri said the plan now is to use that diversity -- along with AI -- to make Windows "a more personalized experience than ever before." In the early years, when Microsoft was trying to put a PC on every desk and in every home, the company would distribute one version of its software to all its users. AI is the opposite of that. 
"We go from compiling code one time for millions of people to now really compiling code for each person on the planet," said Steven "Stevie" Bathiche, a Microsoft technical fellow and the longtime leader of its Applied Sciences Group, which works on future generations of technology. "If you think about the scale of that," Bathiche said, "it's kind of crazy." For this third chapter in our Microsoft @ 50 series, GeekWire spent more than a month revisiting the story of Windows, taking a new look at its history, and getting a sense for what's coming next. We spoke with current Windows leaders, longtime journalists and analysts, and some of the former Windows chiefs who led the OS through pivotal moments in the past. We toured the Redmond lab where Microsoft prototypes and tests new Windows devices, including its own Surface laptops and tablets -- a line of first-party hardware that represents one of the biggest changes in approach from the early years of Gates and Allen. And as with prior chapters, we looked for new insights in books and other historical records, including Microsoft's annual reports and our own reporting archives, with help from AI. There were waves of nostalgia. For many people, listening to the history of Windows startup and shutdown sounds is like hearing the soundtrack of our personal and professional lives. This being Windows, there were also chances to chuckle. The very real headline, "Man gets Windows Vista to work with printer," didn't land me a job at The Onion, but seeing that old post again reminded me just how much it resonated with frustrated Windows users at the time. Knowing what Windows would become, some of the history seems quaint in hindsight. In Microsoft's first annual letter to shareholders, in 1986, Bill Gates and Jon Shirley, the company's COO and president, listed the release of Windows 1.0 as one of many milestones for Microsoft in the prior year. 
They noted that more than 500 software developers were planning to build applications for the fledgling operating system. It was just a hint of the giant wave of third-party software to come. Companywide, Gates and Shirley cautioned investors that Microsoft's 20% profit margin was "probably not sustainable, especially in this period of heavy R&D expenditures." They were completely wrong. Or at least way too conservative. Fueled by the rapid growth of Windows and Office, and the exponential economics of the software business at the time, Microsoft's overall profit margins climbed steadily, to more than 40% in the company's 2000 fiscal year, based on $9.4 billion in profits and nearly $23 billion in revenue. Fast-forward nearly 25 years to today: Microsoft had more than $88 billion in profits in fiscal 2024, with $245 billion in revenue -- a substantial 36% profit margin at a very large scale. Entire books have been written about what happened to Windows in between. The internal code names that were used for different Windows versions -- Cairo, Whistler, Longhorn, etc. -- still elicit groans of disgust or nods of appreciation from those who lived through those eras. After its introduction in 1985, Windows at first struggled to gain traction. The debut of Windows 3.0 in 1990 provided the first real sign of success. Microsoft's decision to end its partnership with IBM on OS/2 in the early 1990s gave the company the freedom to go its own way. "Like Star Trek movies, Windows releases alternated between good and bad, odd and even," writes Steven Sinofsky, recalling that era in his book and website, Hardcore Software, which tells the inside story of his time at Microsoft, including his tenure as Gates' technical assistant, before running Office, and then Windows. In those terms, Windows 95 was the box-office blockbuster. Tonight Show host Jay Leno joined Gates in Redmond to introduce the new operating system at one of the most memorable launch events in tech history. 
And yes, many years before the iPhone, people actually lined up at the store for a PC operating system. "Seeing all that came together was incredibly exciting, incredibly rewarding -- seeing the vision of graphical operating systems go mainstream," said Brad Silverberg, who joined Microsoft in 1990 and led Windows development for the next decade. "We changed the world." Silverberg recalled that the Windows 95 launch ad, set to the Rolling Stones classic, "Start Me Up," was so effective that, after seeing an early preview, the Windows team decided to go with the name "Start" for the button in the lower left corner of the desktop. "Some people wanted to call it the 'Go' button," he said. "There were some other names ... 'Start' was one of them. There was good debate internally. Then when we saw the TV ad that Wieden+Kennedy put together for us, that made the decision. There was no more debate." Behind the scenes, the development of the 32-bit Windows NT and the Win32 API in the 1990s ultimately solidified and unified the platform for businesses, developers, and consumers, culminating in the debut of Windows XP on the NT kernel in 2001. Windows Vista in 2007 was a flop, Windows 7 in 2009 redeemed the franchise, Windows 8 in 2012 pivoted to tablets, Windows 10 in 2015 refocused on desktops and laptops, and Windows 11 in 2021 set the stage for a shift to the cloud (Windows 365) and AI (Copilot+ PCs). Toss in a few landmark antitrust cases and a string of high-profile cybersecurity incidents, and you get a very abbreviated caricature of how Windows got to where it is today. In the process, Windows has been dwarfed by the rest of Microsoft's business, as its revenue has flattened, and Office (Microsoft 365) and the cloud (Microsoft Azure) have soared. After Microsoft's acquisition of game giant Activision Blizzard, the company's Xbox division surpassed Windows in revenue earlier this year, at least temporarily.
And for the 2024 fiscal year, which ended in June, Windows fell below 10% of Microsoft's total revenue for the first time. "I feel like Windows is holding its place in society -- a hard-fought, very important, mission-critical place in society, and that requires great work from a lot of people," said Terry Myerson, who led Windows from 2013 to 2018 as part of a 21-year career at Microsoft, before his current position as CEO of Seattle-based healthcare data startup Truveta. At the same time, Myerson said, "there's still this dream of growing its role in society." And that's where Microsoft is betting on AI. It starts with the NPU, which originated in mobile phones and the field of computational photography. As phones evolved, the constraints around optics and sensors drove a shift towards using more computational power to enhance image quality, requiring a special chip. Especially as Microsoft worked with Qualcomm to expand its Surface lineup from Intel to ARM-based processors, it became clear that the NPU could open a new world for Windows. Microsoft had already been working on the plan for years when a small group of leaders met with Sam Altman and others from OpenAI for an early demo of its breakthrough AI model. Bathiche, the longtime leader of Microsoft's Applied Sciences Group, remembers turning to Panos Panay, who was then in charge of the Windows and Devices business. "We were in the middle of Windows planning, and I was like, 'This is it.' Everything we're planning to do with Windows, this is how it all fits in," Bathiche said in a recent interview. That ultimately led to the introduction of the new Copilot+ PCs earlier this year, including the Recall feature that gives users the ability to quickly find anything they've seen on their PC. In May, at the Copilot+ PC launch event on the Redmond campus, Microsoft CEO Satya Nadella reminisced about the launch of Windows 95 in almost the same spot nearly 30 years earlier.
"If you go all the way back to even the birth of modern computing, 70 years ago, the pursuit has always been about how to build computers that understand us, instead of us having to understand computers," Nadella said. "I feel like we are close to that real breakthrough." In other ways, the rollout so far has felt at times like an homage to the roller-coaster history of Windows. Apart from the privacy and security concerns, and delays in the release, the Recall preview has been getting mixed reviews from early users in the Windows Insider program. Other Windows features enabled by the NPU include the ability to generate AI images and translate real-time captions for conversations in different languages -- interesting use cases but not enough to compel people to line up to buy new Copilot+ PCs, a la Windows 95. "Most of what we've seen so far has been pretty lackluster," said Paul Thurrott of Windows Weekly and Thurrott.com, a longtime analyst, author, reviewer and reporter. "At the end of the day, I don't think we've seen what will make the most sense for Windows ultimately." With the addition of a new NPU, one key function for Windows will be to serve as an orchestrator, delegating tasks to the most efficient chip for the job. But that's still a low-level task, in the original spirit of a traditional operating system, not a glitzy new feature. "As a human being, or as a user, you have to look at this stuff and say, 'Well, OK, but what do I get out of it?'" Thurrott said. "The problem with local AI, especially, but maybe even AI in general, is that there's no killer app. There's a lot of micro-utilities that are excellent, and useful, but don't benefit everyone generally."
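The orchestrator role described here, delegating each task to the most efficient chip, amounts to a scheduling policy. A toy sketch, where the task categories and routing rules are assumptions for illustration, not Windows' actual scheduler:

```python
from enum import Enum

class Chip(Enum):
    CPU = "cpu"
    GPU = "gpu"
    NPU = "npu"

def dispatch(task: str) -> Chip:
    """Toy scheduling policy: send each task type to the chip best suited
    for it. Task names here are hypothetical examples, not real Windows
    workload identifiers."""
    if task in {"image_generation", "live_captions", "recall_indexing"}:
        return Chip.NPU  # sustained, low-power AI inference
    if task in {"render", "video_encode"}:
        return Chip.GPU  # massively parallel graphics work
    return Chip.CPU      # general-purpose fallback
```

As the article notes, this kind of dispatch is invisible plumbing rather than a headline feature, which is exactly why it's hard to market.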
Davaluri, the Windows & Devices chief, said the company listened and acted on the feedback about Recall's potential security and privacy issues, in line with Microsoft's Secure Future Initiative, making a series of adjustments (including an opt-in experience) before releasing it in November as a limited preview, even within the Windows Insider program. More broadly, he said, AI in Windows is still in its infancy. "We're at the start of the journey when it comes to AI products and features," he said, promising that the company will continue to "listen, learn, iterate and refine" its approach. In an echo of the past, Microsoft is also working to spark new third-party applications on Windows, this time taking advantage of the NPU, Microsoft Copilot, and other AI features. "In the big picture, the world is a lot bigger than Microsoft," said Brett Ostrum, Microsoft's corporate vice president of Surface devices. "And so the expectation would be that they're going to come up with as much, if not more, over time, than the engineers at Microsoft." Silverberg, the retired Microsoft executive who led the Windows team in the 1990s, said he sees clear parallels between AI and Microsoft's approach to integrating the internet into everything in his era. It wasn't just about Internet Explorer, even though the browser got the headlines. Microsoft today sees AI "as a fundamental element of everything, and not just off in a chatbox somewhere," Silverberg said, citing the possibility of third-party developers creating "a whole new generation of applications that unleash unforeseen types of creativity." "That's when you know you have something really powerful and really exciting -- when it gets used in ways that the inventors never really imagined," Silverberg said. "That happened with the internet, for sure, and that happened with PCs, and it's going to happen with AI."
Microsoft, like many companies, tends to keep its product roadmap close to the vest until it's ready to reveal. But it's not hard to guess some of the directions that Windows could go from here.

Speaking with Bathiche recently, I described how I was using AI to help with the research for this project, uploading a whole range of source materials to Google's NotebookLM to search for new insights and quickly verify key facts in a matter of seconds, rather than hours. But first I was collecting many of these documents in a folder on my Windows PC. To me, it would make more sense to skip the upload and do the AI analysis directly on my computer.
A comprehensive look at the major AI developments in 2024, including legal challenges, technological breakthroughs, and growing privacy concerns.
The year 2024 saw significant legal battles in the AI industry, with the New York Times leading a high-profile lawsuit against OpenAI and Microsoft. Filed on April 30, 2024, the lawsuit alleged that these companies used news articles without permission to train their AI models [1]. This case, along with a separate lawsuit filed by the New York Times against OpenAI in December 2023, highlighted the ongoing debate over copyright infringement in AI training [1].
OpenAI defended its position, claiming that the New York Times' reporting was incomplete and that the company had made efforts to address concerns about content regurgitation [1]. The outcome of these legal battles could have far-reaching implications for how AI models are trained and developed in the future.
Nvidia continued to dominate the AI chip market in 2024, controlling an impressive 80% market share [2]. The company announced plans to release new AI chips annually, doubling its previous two-year release schedule [2]. This aggressive strategy was a response to the rapid growth of AI applications and the increasing demand for more powerful chips to train and run AI algorithms.
Nvidia's H100 GPU became one of the most sought-after chips for AI development, with companies like OpenAI and Meta using it to train their models [2]. The high demand for these chips led to concerns about potential shortages and skyrocketing prices, with upcoming Blackwell chips expected to cost over $30,000 [2].
Google made waves with the introduction of AI Overview (formerly Search Generative Experience) in May 2024 [3]. This feature uses Google's Gemini AI model to generate summaries of search queries based on top results. However, the launch was marred by controversy as users reported inaccurate and potentially dangerous AI-generated summaries [3].
The incident highlighted the challenges of integrating generative AI into search engines and raised questions about the readiness of AI to replace traditional search methods [3]. Google responded by implementing additional safeguards and manually removing problematic results, but concerns about AI accuracy and the potential for misinformation persisted [3].
Microsoft's announcement of the Recall feature for Windows 11 sparked significant privacy concerns in 2024 [4]. Recall uses AI to track and interpret user activities on Windows computers, taking regular screenshots to help users retrieve past information or activities [4]. The feature faced immediate backlash due to potential security vulnerabilities and privacy risks.
Cybersecurity experts identified several issues with Recall's initial design, including the storage of data as plain text files with minimal protection [4]. Microsoft delayed the feature's release multiple times throughout the year, making adjustments to address security concerns. However, even after its limited release in December, security issues continued to surface [4].
Despite the controversies, 2024 also saw a number of positive AI developments.
These developments showcased the potential for AI to enhance user experiences and solve practical problems, while also highlighting the ongoing need for careful consideration of privacy and ethical implications in AI implementation.
A look at how AI shaped various aspects of technology and society in 2024, including AI companions, privacy concerns, product saturation, and its impact on creative industries.
9 Sources
AI-powered laptops are emerging as the next big trend in personal computing. These devices promise enhanced performance, improved user experiences, and new capabilities that could reshape how we interact with our computers.
2 Sources
A comprehensive look at the AI landscape in 2024, highlighting key developments, challenges, and future trends in the rapidly evolving field.
8 Sources
An in-depth look at the emerging AI PC market, focusing on the latest developments from major chip manufacturers and the challenges they face in consumer adoption and technological advancement.
8 Sources
A comprehensive look at the latest developments in AI, including OpenAI's Sora, Microsoft's vision for ambient intelligence, and the shift towards specialized AI tools in business.
6 Sources