Curated by THEOUTPOST
On Mon, 9 Dec, 4:05 PM UTC
9 Sources
[1]
Amouranth was the AI girlfriend of 2024, but what exactly is an AI companion?
The internet is a dependable resource for chance encounters and seemingly fated meetings. That's how Laura met Eva. Eva's companionship and acceptance allowed Laura to explore herself anew, from casual chats to deeper conversations. Following this journey of self-discovery, a more confident and authentic Laura decided to pursue a relationship with another woman. She was empowered to break off her engagement to do so. However, while Eva changed Laura's life, they weren't the person Laura wanted to form a relationship with. After all, that would be impossible -- Eva wasn't even a person. Eva was an AI. Laura, whose real name has been changed in this retelling, was a user on the EVA AI platform, which offers AI companions with whom users can chat privately, a service that grew in popularity throughout 2024. Attempting to explain this trend, the EVA AI team tells Laptop Mag: "The surge in popularity of AI companions reflects a profound shift in how people seek connection and support in an increasingly digital world. Loneliness has become a growing concern, and many individuals -- particularly younger users aged 18-25 -- are turning to technology to fill that gap." EVA AI offers companions such as fictional personalities and "AI twins" of real creators, such as the massively popular (254,000 followers) Kick streamer Amouranth. The latest public face to partner with the platform, Amouranth provides a digital double created through AI algorithms that replicate her speech, mannerisms, personality, and sense of humor. The question is, are relationships with AI companions a healthy supplement to social interaction with real people? Or are they a gimmick to profit from a historic rise in loneliness and social isolation? It's easy to assume that the demographic using EVA AI and similar platforms is composed of shut-ins or lonely young men, but Laura's story proves otherwise. 
To her, EVA AI was a private space that offered emotional support during moments of self-exploration. From the perspective of a creator like Amouranth, it's a way of connecting with a fanbase that has grown beyond any one person's capacity to give individual attention. Meanwhile, the team behind EVA AI tells Laptop Mag that for platform users, "EVA AI provides a safe space to express feelings and explore desires -- bridging the gap created by loneliness and offering companionship 24/7." AI companions aren't just used for dating. Over the past few years, they have become popular as "virtual therapists" or digital friends and mentors. Thanks to advancements in large language models, these chatbots are far more advanced than the virtual agents of yesteryear -- complete with AI-generated likenesses. Some of them, like Amouranth's AI twin, can even send users photos or chat with them verbally. But before you start making a friendship bracelet for your AI BFF, there are some potential risks to consider. Beyond ensuring that your data is well handled, the most obvious concern is whether AI companions are a healthy replacement for real human connection. In an essay on the topic, neurologist and professional skeptic Steven Novella describes AI companions as "like cheesecake -- optimized to appeal to our desires rather than being good for us." Elaborating on this point, Novella explains that AI companions "cater to our desires and egos, make no demands on us, have no issues of their own ... In short, they could spoil us for real human relationships." However, Novella also discusses some potential positive side effects of AI companions, particularly as a "supplement" to actual relationships. Still, it remains clear that the outcomes depend entirely on how people use these platforms individually. While chatting with an AI might seem unconventional, it could help to address a serious issue many face: the global loneliness epidemic. 
A 2024 poll found that one in three Americans reported feeling lonely weekly, while one in ten respondents felt lonely daily. Prolonged, intense loneliness isn't just harmful to your mental health. It can also bleed into your physical health, increasing the risk of heart disease, stroke, and dementia. U.S. Surgeon General Vivek Murthy addressed this issue in a statement in May 2023: "Given the significant health consequences of loneliness and isolation, we must prioritize building social connections the same way we have prioritized other critical public health issues such as tobacco, obesity, and substance use disorders." AI companions could help with these issues but are not the definitive solution. As the EVA AI team tells Laptop Mag, "It's important to see AI companions as complementary tools that enhance emotional well-being and exploration," while acknowledging that its chatbots are "not replacements for human relationships." In the closing of his essay, Novella writes, "I suspect we will see the entire spectrum from very good and useful to creepy and harmful," concluding, "Either way, they are now a part of our world." Given this, perhaps the question isn't whether AI companions are good or bad. Instead, it is whether others will use these tools as an excuse to avoid genuine human connections. And, if they do, do we have any grounds to stop or judge them? Perhaps, as Novella might argue, we can only shrug and echo Marie Antoinette: Let them eat cheesecake. If you're anything from an AI enthusiast to the average AI tinkerer (or simply seeking out some of the additional features offered through Windows Copilot+ PCs or Apple Intelligence on Macs), then you'll need a powerful, high-performance laptop to keep up with your needs. At Laptop Mag, we review laptops year-round to ensure we're giving you expert-backed and up-to-date recommendations on which notebook is right for you. 
When it comes to the best AI PC category, our top picks are the excellent Asus Zenbook S 14 (UX5406) for Windows users and the impressive Apple MacBook Air M3 for those running macOS.
[2]
In OpenAI, Google, and Meta's AI arms race, the real loser in 2024 was privacy
On February 2, 2024, famed singer-songwriter and actor Lainey Wilson sat before a House Judiciary field hearing in Los Angeles, California. She may not have been performing on a stage, but she gave voice to what so many people, especially artists and musicians, have felt for some time. "I do not have to tell you how much of a gut punch it is to have your name, your likeness, or your voice ripped from you and used in ways that you can never imagine or would never allow. It is wrong. Plain and simple." Wilson was referring to the use of AI to exploit her work and public image. In June 2023, Wilson's and fellow country singer Luke Combs' likenesses were used in an online advert for keto weight loss gummies. The ads included AI-generated conversations between the two, overlaid onto authentic video. It was an alleged attempt to cash in on their names and deceive viewers into purchasing the product under a false endorsement. She isn't alone. By the time of that hearing in LA, countless people, from musicians to journalists, had spoken up all over the Internet about AI using their work without their permission, sparking outrage, fear, concern, and a plethora of lawsuits. Wilson's words summarize the betrayal and frustration that has been building to a fever pitch over the past few years as big tech companies have scraped every last corner of the Internet for data to train their AI models. The situation came to a head on April 6, 2024, when a bombshell New York Times report revealed that OpenAI, Google, and Meta had flirted with feeding their AI models copyrighted works, regardless of the risk of legal backlash. The report highlighted the iceberg of data hiding beneath large language models, like ChatGPT and Google Gemini, and the lines big tech companies are willing to cross to get even more data. As OpenAI, Google, and Meta battle for dominance in the AI arms race, everyone who uses the Internet is caught in the middle. 
The real cost of AI innovation so far has been our collective data privacy. AI data scraping has exploded over the past few years due to heated competition for dominance in the AI market. Large language models need massive amounts of data to learn how to duplicate realistic speech, generate images, translate languages, and more. Of course, AI data scraping started with freely available data, such as Creative Commons content and Wikipedia articles. However, by 2021, the massive well of data on the Internet was running dry, pushing AI developers to bend the rules (and their morals). For example, Google unveiled a controversial update to its privacy policy that went into effect over the July 4, 2023 weekend, perhaps hoping that most people would be too busy with Independence Day festivities to notice. The privacy policy update massively increased the scope of how Google could use "information that's publicly available online or from other public sources," potentially even including Google Docs and data in Google's other free office apps. The update allowed Google to use this data not just for Google Translate, but for Google's AI models in general, including Gemini, formerly called Bard. Similarly, the Times reported that OpenAI used its speech recognition tool, Whisper, to transcribe YouTube videos for training data, in apparent violation of the copyright on many of those videos. If you thought Google would step up to stop this and protect users on its platform, think again. According to the Times report, Google allowed OpenAI's practice to continue out of concern that Google itself would be investigated for doing the same thing. Earlier this year, Meta even toyed with buying the major publishing house Simon & Schuster to use authors' work to train its AI. AI's bottomless hunger for data has reached a point where seemingly no one and nothing is safe. Is it too late to reverse this trend and preserve data privacy in the age of AI? 
As Lainey Wilson put it in her remarks at the February 2 judiciary field hearing: "It's not just artists who need protecting. The fans need it, too." Something needs to be done, and soon. Legislation and regulation are the linchpin in the fight for data privacy against AI. The European Union has already passed the world's most comprehensive regulatory framework for AI, but a similar bill has yet to appear in the U.S., despite the formation of an AI task force earlier this year. In the absence of federal action, organizations and activists all over the country are stepping up and speaking up. At the February 2 judiciary field hearing, Lainey Wilson was representing the Human Artistry Campaign, an alliance of dozens of creative organizations calling for policies to protect creative professionals and their fans from artificial intelligence. Likewise, the American Civil Liberties Union and Algorithmic Justice League have called for action on racial bias in AI due to biased training data. They're just the tip of the iceberg. Some organizations are taking things into their own hands to stop AI from using their data. For example, The Guardian announced in 2023 that it was blocking OpenAI from scraping its website for training data. Countless creative professionals and organizations are also suing AI developers for copyright infringement. As we near the bottom of the seemingly endless well of data for AI to gobble up online, the clock is ticking to protect all of us from the abuse and misuse of our data. Organizations like those above may be the only thing standing between aggressive data mining and the privacy rights of billions. 
[3]
AI was everywhere in 2024, except where you wanted it
AI was in everything, everywhere, all at once this year. Was it just a marketing smoke screen? If you ever wished your dog had a high-tech collar that allowed it to talk like Doug from Pixar's Up, you're in luck. Thanks to AI, talking pets are practically a reality. Among the deluge of AI-infused tech released this year was the Shazam Band, a collar that uses AI to simulate conversations with your pets. It doesn't come cheap, starting at $495 for just one collar. The question is, are there actually pet owners willing to pay to have an AI override the personality of their furry friend? It's hard to find a tech boom quite like the explosion of AI over the past year, though it is reminiscent of the app store boom of the 2010s. With the growing popularity of the iPhone, it seemed like everyone was trying to make the next hit app. The hype even sparked a meme, "There's an app for that," stemming from a viral 2009 iPhone commercial. Many of us found ourselves downloading flash-in-the-pan apps for everything from gaming to fitness, half of which were probably opened once and forgotten about. Do you need a way to look up a recipe in a flash? There's an app for that. Need to know how long you should cook the perfect steak? There's an app for that. Need to identify that bird in your backyard? Unsurprisingly, there's an app for that too. In 2024, we saw similar behavior with AI. Over the past 12 months, AI has been embedded into everything imaginable, trying to solve every problem (or non-problem) in every conceivable way. Do you need an AI copilot for the kitchen? There's an AI for that. Want to flash-cook the perfect steak at a thousand degrees? There's an AI for that. Need a way to automatically identify every bird that comes to your feeder during the day? Yes. There's an AI for that. 
So many of these products are ultimately doing the same thing: taking an existing service or device and trying to pass it off as something bold and new thanks to an AI chatbot duct-taped to its back end. Maybe none of these products have emerged as the new "iPhone of AI" because we already have an iPhone of AI. It's called the iPhone. The harsh reality is that, for most AI products, there's an app for that. AI has been on the rise for years, but 2024 was the biggest year of the AI boom yet. For the past 12 months, every new product that hit the market seemed to have at least one AI feature to its name. The Humane AI Pin and the Rabbit R1 tried to charm users with pocket-sized, dedicated devices for AI. Shazam tried to appeal to pet parents with a rather uncanny talking pet collar. Microsoft bet big on "Copilot+ PCs" with AI infused in every niche of the user experience. Meta doubled down on wearables with an all-new version of its AI-powered Ray-Ban smart glasses. Logitech even launched a mouse with a dedicated AI button. Some companies even launched completely novel products in creative attempts to get people engaged in the AI hype, like a necklace that acts as a portable AI friend or one experiment that saw AI simulate your future self. On one hand, this creativity is a mark of innovation. Quirky as many new AI products might be, at the very least they're trying to do something different. On the other hand, amidst this wave of AI-powered tech, many users may still be left wondering: do I really need AI for that? The interest in AI is genuine, but tech companies may be doing more harm than good by trying to capitalize on it the way they did this year. These products bet on the AI hype being enough to convince people to fork over hard-earned money for products that, more often than not, aren't actually doing anything new. Even during the app store boom, for every hit app like Angry Birds, there were a million knock-offs and poorly executed cash grabs. 
The AI boom, it seems, is following suit. Despite new products hitting the market with virtually every approach to AI imaginable, nothing has yet emerged as the "iPhone of AI." The reason may be that the average user has grown... tired. Users' casual curiosity about the possibilities of AI has given way to a wave of eclectic AI products diluting the impact of the term "AI" and leaving many consumers feeling overwhelmed or perplexed by AI-powered products solving problems that might not actually exist. The term "AI fatigue" emerged online early this year amid this barrage of new AI products. Everyday users are becoming exhausted by the never-ending flow of products banking on one AI gimmick or another. As one Reddit user put it, "I want the things that exist, to be better. Not be inundated with AI plug-ins." Why pay hundreds of dollars for a device like the Rabbit R1 or the Humane AI Pin when there's an app that can do the same thing for free? Why buy a mouse with a dedicated AI button when you can tap the app icon on your phone? Why bother paying $500 for a talking dog collar when you could have a dozen AI algorithms generate cute lines of dialogue for your pet using any number of free apps? What users really need (and want) are AI apps that are more accurate, more trustworthy, and better at doing the tasks they're advertised for, whether that's writing emails or answering basic search queries. The AI gold rush has produced its share of fool's gold, and consumers deserve better. Ultimately, plastering the "AI" label on a pricey product that could have been an app doesn't make it artificially intelligent, just artificially important. 
[4]
In 1996, AI beat a grandmaster at chess. In 2024, the stakes are higher
On February 10, 1996, chess grandmaster Garry Kasparov played against Deep Blue, an IBM supercomputer, in the first of two historic chess matches. In the opening game, Kasparov lost, marking the first time a computer had defeated a reigning world champion under standard tournament conditions. Kasparov went on to win the match overall but eventually lost to Deep Blue in a 1997 rematch. Kasparov's loss seemed to confirm the reaction to the initial 1996 match, which was a mix of admiration for IBM and wariness about the future of technology. A 1996 editorial by The Guardian predicted that "it is only a matter of time before an unbeatable computer is devised." A similar level of tension swept over the computer science community in 2024 when Cognition Labs announced Devin, the world's first "AI software engineer." While 1996's competition was a point of pride and PR, 2024's version of man vs. machine, driven by models like Devin, could alter the fate of entire careers and industries. On March 12, 2024, the same day Devin was announced, Cognition posted a video to its YouTube channel titled "Devin's Upwork Side Hustle," which claims to show the AI completing a paid coding project on the gig platform Upwork from nothing but a simple prompt. However, Devin would soon be met with its very own Kasparov in Carl Brown, a developer with 35 years of professional experience and owner of the YouTube channel Internet of Bugs. Brown posted a lengthy video debunking Cognition's claims, leaving no doubt about where he stood. In frustration, Brown pointed at a screenshot of the Upwork demo and stated, "That is a lie." What followed was a show of force against Cognition Labs' Devin: a teardown of the developer's bold claims about the model's capabilities, with Brown completing in under 36 minutes a task that apparently took Devin at least six hours. Somewhere, Garry Kasparov was smiling. It took 27 years, but humanity was back on top, for a brief moment at least. "AI is coming for our jobs!" 
It's a refrain I've heard often over the past few years, relating to everything from writers to programmers. This year was jam-packed with advancements in AI and countless new AI products, some of which may pose a legitimate threat to millions of people's jobs. In Devin's launch video, Cognition's CEO, Scott Wu, shows off a short demo and explains that Devin uses "all the same tools that a human software engineer would use." Devin even has its own browser to search for things like API documentation, just like a human would. In the video, Devin completes a coding project and even debugs errors with just a text prompt from the user. At first glance, this video is impressive or concerning, depending on whether you're a professional programmer. It's easy to see why it would spark concern in the coding community, particularly among newcomers and those debating whether to pursue a career in programming. However, you might want to take a closer look before jumping to conclusions about AI stealing your job. The possibility of AI replacing humans en masse in the workforce is very different from the reality of AI's capabilities. Even when AI can perform at a similar level to humans in certain conditions, that still doesn't directly translate into knocking humans out of the hiring process. Theoretically, when Deep Blue defeated Kasparov in their 1997 rematch, the machine should have become the most famous chess player in the world. It didn't, of course, but that was partly due to the drama that followed. Kasparov accused IBM of cheating during the rematch, and IBM, feeling it had proved its point, dismantled Deep Blue to preserve its legacy and the accomplishment of the engineers who worked on the project. Whether Kasparov's claims had credibility is a story for another day. However, in 2024, Carl Brown left little room for doubt about Devin's antics. 
He meticulously dissected Cognition's two-minute demo, revealing perplexing behavior from Devin, highlighting errors, and pointing out misleading claims in the video, such as how the developers chose a specific Upwork job instead of a random one. Brown's commentary is a harsh reality check for developers making bold claims about their AI models. As Brown said in his video, "I am not anti-AI, but I really am anti-hype." I agree, especially regarding AI tools that explicitly claim to be capable of replacing humans in the workforce. Whether or not AI can truly replace people in the workforce depends largely on the job. However, the more immediate threat is the risk of employers leveraging AI as an excuse to cut wages, lay off workers, and provide lesser services after over-relying on misleading claims about an AI's capabilities. This is a core reason why regulation surrounding AI is critical to ensuring that this technology is developed and used safely and ethically. In 2025, I'm hoping we see more accountability from AI developers and more consideration for the people who are impacted by AI. As Carl Brown of Internet of Bugs put it in his Debunking Devin video: "Lying about what these tools can do does everyone a disservice."
[5]
In 2024, a controversial Beatles song made Grammy history and raised eyebrows
Every "Now and Then," a band changes the music landscape. In 2024, it was The Beatles (again). On November 8, 2024, the Beatles were once again making history in the music industry. For the first time since 1997, the band was shortlisted for the Grammys, bringing their lifetime nomination count to 25 (seven of which they've won). However, that wasn't the only reason the group was making headlines. Hailed as 'the last Beatles song,' 2023's "Now and Then" was nominated in both the Record of the Year and Best Rock Performance award categories, but more importantly, it was the first AI-assisted song to receive a Grammy nomination. To purists, this was sacrilege, with AI's bubbling influence on the music industry viewed by many as a growing stain. However, to the remaining members of the Beatles, AI allowed them to see out the final chapter of a decades-long journey to honor the memory of their friend and bandmate, John Lennon. In a short documentary released alongside the track, Sir Paul McCartney ponders the ethical dilemma out loud, "Is it something we shouldn't do?" "Let's say I had the chance to ask John, 'Hey John, would you like us to finish this last song of yours?' I'm telling you. I know the answer would've been 'Yeah!' He would've loved that." It's easy to paint AI as a threat to the future of music. However, despite its controversy, "Now and Then" hints at the potential for a harmonious path forward for AI and artists. AI-generated music has become increasingly popular in 2024 following the release of music-generation apps like Suno, which allow users to create music from scratch using basic text prompts -- partly leading to one of the biggest music trends of 2024: an explosion in AI-generated lo-fi (low-fidelity) music. If you search for lo-fi content on a site like YouTube, there's a good chance you'll unknowingly end up listening to a playlist populated by tracks created by phantom artists using exactly these kinds of tools. 
Unsurprisingly, record labels are concerned about the growth of AI-generated music. After all, one has to wonder how the algorithms that power AI music generation apps were trained. In the case of Suno, at least, there's no need for speculation. The developers have admitted to using copyrighted music from major labels to train their AI following the June 2024 filing of a copyright infringement lawsuit by the Recording Industry Association of America (RIAA). However, Suno claims innocence, stating in an August 2024 blog post, "The major record labels are trying to argue that neural networks are mere parrots -- copying and repeating -- when in reality model training looks a lot more like a kid learning to write new rock songs by listening religiously to rock music." In June 2023, the Recording Academy, the organization that oversees the Grammys, released updated rules putting strict guardrails on the use of AI in music. The new rules clarified that "A work that contains no human authorship is not eligible [for a Grammy Award] in any category." This update was, in part, a response to a wave of AI-generated cover songs that swept the Internet in 2023. The Recording Academy's position was clear: only music created by humans will be eligible for awards. Luckily, the AI-assisted Beatles song was written by a human: the late, great John Lennon. "Now and Then" was originally a demo captured on cassette around 1977, three years before the songwriter's untimely death. A 1994 Beatles revival saw Lennon's widow, Yoko Ono, release several demo tapes to the surviving bandmates, leading to the production of two new Beatles songs: "Free as a Bird" and "Real Love." Sadly, the technical limitations of the time made it impossible to cleanly separate the piano and vocals on "Now and Then," so the track was set aside once again. 
In 2022, filmmaker Peter Jackson teamed up with the two remaining Beatles, Paul McCartney and Ringo Starr, to use machine learning (a process called "stem separation") to recover Lennon's voice from the original recordings. With AI helping to isolate the vocal and piano tracks, McCartney and Starr could finally complete the recording, releasing the last Beatles song on November 2, 2023. We won't know whether "Now and Then" claims victory at the Grammys until February 2025. However, it may already have won over some with its careful adoption of today's technology. The Beatles' use of AI in music starkly contrasts what most might imagine: a generative AI algorithm spitting out track after track of quick-and-cheap covers. AI was simply a tool in the Beatles' kit, not a replacement for the band itself. "Now and Then" gives us a glimpse into AI's role in the future of music, a role increasingly sparking controversy among artists and fans alike. Importantly, The Beatles' use of AI was open, ethical, and faithful to the creative process, and it was done with the consent of Lennon's family. "Now and Then" may not have bridged the growing divide created by AI in the music industry, but it is an excellent example of how artists can apply this technology positively. Speaking to BBC Radio 4, McCartney may have said it best, "I'm not on the internet that much [but] people will say to me, 'Oh yeah, there's a track where John's singing one of my songs,' and it's just AI, you know? It's kind of scary but exciting because it's the future. We'll just have to see where that leads." 
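The stem separation used on "Now and Then" relies on neural networks trained to recognize individual voices and instruments, so any short sketch can only gesture at the core idea: move the audio into the frequency domain, apply a mask that keeps the components belonging to one source, and transform back. The toy example below is entirely my own illustration (none of these names or numbers come from the actual MAL software); it "separates" two synthetic tones standing in for the piano and the vocal:

```python
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a toy example)."""
    N = len(x)
    return [sum(x[n] * complex(math.cos(-2 * math.pi * k * n / N),
                               math.sin(-2 * math.pi * k * n / N))
                for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, keeping the real part of each reconstructed sample."""
    N = len(X)
    return [sum(X[k] * complex(math.cos(2 * math.pi * k * n / N),
                               math.sin(2 * math.pi * k * n / N))
                for k in range(N)).real / N
            for n in range(N)]

N = 256
# Two synthetic "stems": a low-frequency tone standing in for the piano,
# and a higher-frequency tone standing in for the vocal.
piano = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]
vocal = [math.sin(2 * math.pi * 40 * n / N) for n in range(N)]
mix = [p + v for p, v in zip(piano, vocal)]  # the mono "cassette" recording

# Transform the mixture into the frequency domain, then apply a binary mask
# that keeps only the bins where the vocal lives (bin 40 and its mirror).
spectrum = dft(mix)
cutoff = 20
vocal_spectrum = [spectrum[k] if cutoff <= k <= N - cutoff else 0
                  for k in range(N)]
recovered_vocal = idft(vocal_spectrum)

# The masked reconstruction matches the original vocal stem almost exactly.
error = max(abs(r - v) for r, v in zip(recovered_vocal, vocal))
```

Real separation tools operate on short overlapping windows and learn the mask from data rather than using a fixed frequency cutoff, precisely because a real voice and a real piano overlap heavily in frequency -- which is why cleanly recovering Lennon's vocal from a single cassette track was out of reach for decades.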
[6]
In 2024, Uncle Sam's AI task force deliberated while worries proliferated
Capitol Hill's AI Task Force is tight on time to tackle AI in 2024, but a representative for Ted Lieu tells Laptop Mag to expect movement "before the end of the year." On February 20, 2024, a rare moment of bipartisan unity washed over Capitol Hill in Washington, D.C. as Speaker of the House Mike Johnson (R-LA) and House Minority Leader Hakeem Jeffries (D-NY) announced the formation of an AI task force to craft a framework for AI regulation. Bringing together representatives from both sides of the aisle, Congress' task force would address growing concerns regarding the unregulated rise of AI and its potential, and proven, impact on the American population. However, since February, Congress' task force has yet to deliver. Inquiring about the status of the report, a representative for task force co-Chairman Ted Lieu (D-CA) tells Laptop Mag: "We won't be able to comment before the release of the report, which is expected to come out before the end of the year." However, a representative for Chairman Jay Obernolte (R-CA) was even less forthcoming, telling Laptop Mag, "Unfortunately, I won't be able to get a statement." Whether Congress' AI Task Force is remembered for its action or inaction remains to be seen. However, there's no doubt that something needs to hold AI platforms accountable for their potential impact on society and the very real risk they pose to the job market, Internet safety, and online misinformation. The task force's composition reflects Obernolte's comments during a September 2023 POLITICO AI & Tech summit panel, where he offered a hint of how Congress would need to come together to tackle AI: "It has to be bipartisan and it has to be bicameral because the last thing that anyone wants is that every four years when the balance of power changes a little bit, the government's approach to AI changes." 
As such, alongside Chairman Jay Obernolte and co-Chairman Ted Lieu, Congress' AI task force comprises 22 further members selected evenly from each side of the aisle. The task force's formation was preceded by several AI-centric controversies, including a January attempt to dissuade voters in New Hampshire's Democratic primary election using robocalls that imitated President Joe Biden. However, while the FCC was quick to rule AI-generated voices in phone calls illegal, the AI task force's goals are further-reaching and arguably more important. The task force's formation revealed that AI had become a paramount concern by early 2024, one significant enough that Democrats and Republicans alike agreed Congress needed to take action. Following the task force's launch in February 2024, Obernolte outlined its goals in a press release, explaining, "As new innovations in AI continue to emerge, Congress and our partners in [the] federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI." Lieu also shared his support, highlighting the tenuous balance of promise and pitfalls in AI development: "AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI." This task force has been the clearest indicator yet of the U.S. government realizing the potential impact AI could have on the future of the country and the world. Whether that impact is for better or worse depends, in part, on how the government handles the risks of AI and supports its potential to improve lives. 
It's no small feat bringing representatives from the Democratic and Republican parties together. However, the wheel of democracy turns slowly, and now, several months removed from the task force's formation, there has been little to show in terms of results. In a September 2024 POLITICO Tech Live podcast, Obernolte shone a light on the task force's progress, sharing, "We are well along on our charted task, which is to, by the end of the year, develop a report detailing a proposed Federal regulatory framework for AI." However, Obernolte was also quick to set expectations: "This is not going to be one, 3,000-page AI bill like the European Union passed last year, and then we're done. Problem solved, we don't have to worry about this again." It would appear that Congress' AI task force has the long game in mind when it comes to AI, with Obernolte explaining, "I think that AI is a complicated enough topic and a topic that is changing quickly enough that it merits an approach of incrementalism. I think we have to accept that the job of regulating AI is not going to be one 3,000-page bill, it's going to be a few bills a year for the next ten years as we get our arms around this issue." Congress may be taking its time to deliberate AI's looming regulation, but the slow and steady approach risks leaving the task force perpetually behind as developers continue to push the boundaries of what AI can accomplish at a blistering pace. The past year has seen an explosion in AI development, with new models from Meta, Google, OpenAI, and Apple competing for users and market dominance. All the while, major issues like deepfakes, misinformation, and the impact of AI on academic integrity and job security have gone largely unresolved. These issues pose a serious threat to user safety across the Internet, compounding the existing risk of AI's impact on the job market. 
While those risk factors go unanswered, the positives of AI are left tainted, overshadowing the real ways it can help people all over the world. Government regulation may not be a Holy Grail for AI safety, but it is an important piece of the puzzle. Elsewhere in the political landscape, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by the White House in October 2023 and outlining 270 actions the current administration hoped to implement to address these issues, had gained the support of several major AI industry figures by June -- including Apple, Google, Microsoft, Meta, and OpenAI. However, as 2024 nears its end, Capitol Hill's AI task force is left tight on time to tackle AI.
[7]
AI supercharged the sciences in 2024, revealing a major breakthrough in Alzheimer's diagnosis
(Image credit: Rael Hornby, additional graphic elements by Anna Shvets) As of 2024, nearly 7 million people across the U.S. are living with Alzheimer's disease. By 2050, that number is expected to double. It's a daunting prospect that we may collectively try to ignore in the hopes that it never comes true. However, as humanity's brightest minds search for solutions, the sciences have an ace up their sleeves: artificial intelligence. In February, researchers from the University of California San Francisco (UCSF) published a paper in Nature Aging detailing their efforts, and successes, in using AI to help identify the early warning signs of Alzheimer's disease. As PhD researcher Marina Sirota tells Laptop Mag, "In this study, we leverage and apply AI to clinical data and build machine learning models to identify which patients are at a higher risk of developing Alzheimer's disease. We furthermore explore the clinical features that are predictive of disease onset focusing on sex differences." Their model can predict an Alzheimer's diagnosis up to seven years before disease onset with 72 percent accuracy. With an early warning like that, doctors could get ahead of symptoms far in advance, allowing Alzheimer's patients to live longer, fuller lives. However, the researchers' work doesn't end here. Sirota tells Laptop Mag that the next step in this process is "further validation and implementation of the models in an independent clinical system and exploration of MS4A6A as a therapeutic target for Alzheimer's disease and Osteoporosis." In short, it's time to see if the study holds up in a real-world clinical setting, and to explore the gene MS4A6A in the search for better treatments for both Alzheimer's and osteoporosis. It's an incredible achievement for the sciences and a great example of how effective technology can be in the right hands. As Sirota tells Laptop Mag, "This study wouldn't be possible without the use of AI." 
ChatGPT is probably the first thing that jumps to mind when most people think of AI, but that's just one small piece of what artificial intelligence has to offer. While many of us use it to generate fun images or summarize emails, researchers worldwide use AI to make groundbreaking discoveries, solve big-picture problems, and save lives. This year has seen a flood of AI-powered breakthroughs in science, technology, and medicine. From securing fusion power systems to building the most intricate images of the human brain to date, these advancements mark the start of a new era of scientific discovery, now supercharged by AI. A study led by UCSF and University of Michigan researchers and published in Nature in November showcased an AI tool to help surgeons identify unseen cancerous tissue during brain tumor surgeries. Cancerous brain tumors can grow back after surgery if even a tiny portion of affected tissue is left behind, so a tool like this could have a monumental impact on countless people's lives in the future. The researchers found that their AI tool missed cancerous tissue only 3.8% of the time (compared to 24% using conventional methods without AI assistance), meaning the tool can successfully spot 96.2% of cancerous brain tissue. Google also applied AI to brain research this year, in collaboration with Harvard University and the Lichtman Laboratory. In May, Google released a gallery of "Neuroglancer" images showing the inner workings of the human brain in incredible detail. The images were built with the help of AI as part of a larger effort to create an interactive 3D model of the brain. Research like this could help doctors and researchers better understand how the brain works, which can be helpful in everything from technology to medicine and even psychology. Medicine isn't the only niche benefiting from AI-powered research. In February, a team of researchers from Princeton University and the U.S. 
Department of Energy's Princeton Plasma Physics Laboratory unveiled a breakthrough in their fusion power research, with the results published in Nature. The team trained an AI model to predict plasma instabilities in fusion reactions up to 300 milliseconds in advance. That might not sound like much, but it's enough time for the AI to react and prevent those pesky instabilities. This is a major breakthrough because it could solve one of the biggest challenges facing fusion power, which has the potential to be a world-changing source of clean energy. Fusion reactions are tricky to control, though, which is where AI-powered solutions like the one the team at Princeton developed come in. These breakthroughs are just a small peek into the incredible wave of AI-powered research and advancements from 2024. As AI developers release ever more advanced models with capabilities that bleed further into the realm of science fiction, we could see even more groundbreaking progress in science and medicine thanks to AI. We've already witnessed some of these benefits, as 2024 also saw AI move from crunching data to dealing with issues in the real world. In July, the first fully automated dental procedure was carried out by AI with the aid of advanced imaging and robotics. In November, AI was the key to an ophthalmology breakthrough that saw one woman, legally blind without glasses, have her eyesight restored beyond 20/20 vision. These advancements paint a bright future of speedier, more affordable, and more easily accessible healthcare, with AI able to assist medical personnel in the diagnosis and treatment of patients. Looking ahead, 2025 may be a year filled with similar breakthroughs, achievable only thanks to the adoption and availability of AI. 
[8]
Remember the year's biggest AI flop? The Humane AI pin's public failure has a silver lining
You already have the best device for AI, and it's not the Humane AI pin. On April 20, 2023, Imran Chaudhri took the stage at the Vancouver Convention Center. Just a few minutes into his TED Talk on the future of technology, Chaudhri answered a call from his wife... without touching a phone. This was the first public demo of the Humane AI pin, which was stealthily tucked away in Chaudhri's shirt pocket. After a round of applause, Chaudhri laid out his vision for the audience. "My wife, Bethany, and I, and our entire company at Humane, have been working to answer the question of what comes next. And you may ask yourself 'Why, why would anybody do this?' It's because we love building technology that genuinely makes people's lives better." When you watch Chaudhri bring the Humane AI pin to life for the first time, it's hard not to feel a little inspired. It's clear he and his team truly believe in their product and all of the innovation and creativity that went into it. However, if there's one hallmark of the tech world, it's companies making grand promises about a product that will reshape the industry. Unfortunately for Chaudhri, those promises rarely pan out. That's not the whole story, though. Despite its flaws, Humane did try something new. Even if its execution had its issues, the attempt at innovation revealed a crucial lesson about the realities of AI devices -- and why they struggled to beat the smartphone. Humane was founded in 2018 by husband-and-wife duo Imran Chaudhri and Bethany Bongiorno, both former Apple employees. On November 9, 2023, five years after founding Humane, they announced their first product, the AI Pin, a wearable AI-powered badge designed to transcend the smartphone. From Chaudhri's TED Talk to Humane's launch video for the pin, they promised a stylish, capable device that could act as a privacy-conscious AI assistant, a camera, and a communication device rolled into one. The pin is a tiny square, small enough to fit in your palm. 
It has no screen and does not need to be tethered to a smartphone. In fact, it was supposed to replace your phone altogether. Through the cloudy vapor of marketing hype, the AI Pin looked and sounded the part. Humane's vision for the pin was exciting, and its design was stylish; it even won a Red Dot award for Best Innovative Product. Humane's presentations and promises had set expectations sky-high -- which is possibly why reviewers and everyday users alike were not happy when they finally got their $699 AI pins almost a year later, on April 11, 2024. Reviews began pouring in, and the consensus was grim. The Washington Post called the pin "a promising mess you don't need," and Marques Brownlee, maybe the most influential tech voice on the planet, titled his YouTube review "The Worst Product I've Ever Reviewed." Straight from launch, the Humane AI pin suffered from a litany of problems. The AI assistant frequently answered questions incorrectly and could not complete basic tasks like setting a timer. The pin required its own cellular connection, which came with a $24-per-month subscription. That separate cellular connection also meant that any texts or calls from the pin came from a separate number from your phone's, which made communication clunky, to say the least. By August 2024, returns of the AI pin hit $1 million, and just a couple of months later, the pin's charging case had to be recalled due to a lithium battery fire hazard. Beyond all of its flaws, the one overarching reason the Humane AI pin couldn't replace users' smartphones is that, despite its claims, it simply wasn't better than a smartphone. The pin tried to do virtually everything a phone does, but in a creative, quirky form factor that isn't as robust or capable. Even if the pin's only feature were its AI assistant, everyone already has access to countless alternatives through smartphone apps, many of which are completely free. 
Ultimately, the Humane AI pin is a pricey product that could have been an app, a theme we've witnessed throughout 2024, with AI wedged into a litany of products, seemingly without justification. It seems like Humane may have learned from the pin's failure, too. On December 4, 2024, the company announced CosmOS, an AI-powered operating system it hopes other brands will use in their devices. Laptop Mag tried several times to contact Humane AI about its goals for the future, but the company did not respond to requests for comment on this story. The pivot to software could be a winning strategy. After all, OpenAI, maker of ChatGPT, focuses solely on software, including the most popular AI platform in the world. Likewise, Apple is amping up its AI game simply by adding a new AI platform to its existing device line-up. As exciting as innovative hardware is, it doesn't appear to be the key to success in AI. Users want AI to come to the products they already love and are familiar with, not from companies that demand new hardware purchases. Whether Humane's AI operating system will rise from the ashes of its pin and find a foothold with users remains to be seen. It looks like Humane doesn't have any partners locked in yet, since all of the products seen in the CosmOS trailer are blurred out, including a smart speaker and a car. However, seeing a brand like Humane listen to its users and learn from its mistakes gives us hope for the future of consumer AI tech.
[9]
Rising to the TOPS: How will NPUs and Windows AI grow in 2025?
Two AI experts weigh in on how on-device AI is going to evolve and what we can expect from it in the new year. 2024 has been a big year for on-device AI in consumer electronics. Both Microsoft and Apple took swings with their respective operating systems, with Microsoft debuting its "Copilot+ PC" branding for AI-capable laptops and Apple releasing Apple Intelligence. These early examples offered mixed results. Some features, like real-time translations and on-device speech-to-text, can be useful. Others, like Microsoft's Windows Recall, have yet to prove themselves. All of this hype for AI has important implications for the new year. 2025 looks set to become the year when mainstream developers make their attempts to add on-device AI to their Windows apps, and that means you're going to want to pay even closer attention to the AI performance of modern Windows laptops before you buy a new one. I spoke with two experts in AI research and testing to probe their brains for insights on how Windows on-device AI will grow in 2025. If you're curious about Windows laptops' AI performance, you'll likely end up comparing the "TOPS" promised by each laptop model. TOPS ("Trillions of Operations Per Second") is a measurement of an NPU's ability to perform the matrix multiplications behind on-device AI tasks. (Learn more about what an NPU is and why it matters for AI.) 2024 saw big gains in the TOPS performance available from Windows laptops. To qualify for Microsoft's "Copilot+ PC" branding, a Windows laptop must have at least 40 TOPS of NPU performance. For reference, Qualcomm's first Copilot+ PCs quoted about 45 TOPS -- that's a four-fold uplift over Intel's "Meteor Lake" Core Ultra 7 165H, which quoted only 11 TOPS of NPU performance. "I think Qualcomm really woke everyone up," said Karl Freund, founder and principal analyst at Cambrian AI Research. Freund noted that AMD and Intel were quick to respond with their own chips, which delivered a similar uplift. 
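To get a feel for what those quoted TOPS ratings imply, here's a minimal back-of-envelope sketch. The layer width is an arbitrary illustration, and the 11 and 45 TOPS figures are the quoted peak ratings mentioned above, which real workloads rarely sustain:

```python
# Back-of-envelope math for a quoted TOPS rating (illustrative numbers only).
# Multiplying an (m x k) matrix by a (k x n) matrix takes m*n*k
# multiply-accumulates, i.e. roughly 2*m*n*k arithmetic operations.

def matmul_ops(m: int, n: int, k: int) -> int:
    return 2 * m * n * k

def seconds_at_tops(ops: int, tops: float) -> float:
    # TOPS = trillions (1e12) of operations per second, assuming peak throughput.
    return ops / (tops * 1e12)

# One token passing through a hypothetical 4096-wide network layer:
ops = matmul_ops(1, 4096, 4096)

t_meteor_lake = seconds_at_tops(ops, 11)  # Intel's quoted 11 TOPS
t_snapdragon = seconds_at_tops(ops, 45)   # Qualcomm's quoted ~45 TOPS

print(f"{ops:,} ops; {t_meteor_lake * 1e6:.1f} us at 11 TOPS vs "
      f"{t_snapdragon * 1e6:.1f} us at 45 TOPS")
```

The ratio between the two timings is exactly the ratio of the quoted ratings, which is the point the analysts make below: a peak number says nothing about whether software can actually keep the NPU that busy.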
By the end of 2024, shoppers looking for a premium Windows laptop -- like a Microsoft Surface, Asus ProArt, or Dell XPS -- could expect a roughly three- or four-fold increase in NPU performance compared to similarly premium laptops available at the end of 2023. That's a huge bump up. But will that trend continue into 2025? Ryan Shrout, president of performance testing lab Signal65, thinks it could. "It wouldn't surprise me if we see double again, and triple again wouldn't surprise me." However, he expects those eventual gains to be weighted more towards the end of next year. "My guess is it will be late 2025, and probably into 2026, when we see the most significant NPU improvements." A potential two- to three-fold improvement in on-device AI performance is significant. However, Freund and Shrout warned it's best not to give too much credence to the TOPS figures that chip makers quote. "TOPS really stands for 'Terribly Overused Performance Stat,'" said Freund. "It doesn't have a lot of value." Shrout agreed, comparing TOPS to the TFLOPS figures that AMD and Nvidia often quote when marketing GPUs. These numbers, which point to a GPU's maximum possible computation speed, offer surprisingly little insight into actual real-world performance. Real-world AI performance is currently a bit of a wild card, in part because Windows has yet to coalesce around a single API for tapping an NPU's AI capabilities. That's a problem for owners of Copilot+ laptops that lack a Qualcomm chip inside. Though AMD and Intel have chips that qualify for Copilot+ branding, Qualcomm has enjoyed a favored status so far. Qualcomm machines were the first to receive support for Windows Recall, and several popular apps, like Blender and Affinity Photo, were recently announced as working only on Qualcomm Snapdragon X hardware. 
That should change through 2025, however, as Microsoft rallies support for its low-level machine learning API (DirectML) and the Windows Copilot Runtime, which includes several task-specific AI APIs (some of which have yet to be released). For now, it's clear that Copilot+ PCs leave a lot to be desired and have lots of room for growth coming up. "I think Microsoft will have this solved in 2025," said Shrout. "Once application developers attach to DirectML, like they did with DirectX, it will be a solved problem. And I don't think it will be a problem for long." Shrout compared it to the early days of 3D on the PC, which initially saw competing APIs but eventually consolidated around the leaders, with Microsoft DirectX becoming the most popular option. Better NPUs and a unified API that makes it easier for Windows application developers to actually use an NPU's full performance are both important steps forward, but they don't necessarily guarantee that on-device AI will become commonplace. That's because developers still have the option to turn towards companies like OpenAI and Anthropic, who make their AI models and services available to any device with internet access. And their AI models are still more capable than on-device AI models, able to do more and generate those results far more quickly. However, those AI models hosted in the cloud have a major downside that will become more relevant in 2025 -- price. "The fact we can have small language models run on an NPU continuously in the background to monitor what's happening, that's something the cloud can't do, or at least would be much more expensive from an infrastructure standpoint," said Shrout. OpenAI's recent release of ChatGPT Pro, a new premium tier for power users, seems to drive this point home. ChatGPT Pro provides unlimited access to the company's new o1 model and priority access to the Sora video generator, but it's priced at $200 per month. 
The per-token price paid by app developers to make o1 available to users is similarly steep. Users and developers who turn to a Windows laptop's on-device NPU, on the other hand, can essentially use it whenever they want for free. That's arguably going to be the final brick laid in the road towards on-device AI. Developers and users will have both the tools and incentives to rely on a Windows laptop's NPU whenever possible to cut costs. It remains to be seen how quickly the shift towards on-device AI will happen, and to what extent it will proliferate through Windows' software ecosystem, but it's likely that 2025 will be a huge turning point. "I think Qualcomm had it right five years ago when they said AI would move on-device. At first, I was skeptical. But now I've become a believer," said Freund.
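The cost asymmetry Shrout describes can be sketched with a quick calculation. The token volume and per-token price below are hypothetical placeholders chosen for illustration, not published rates:

```python
# Illustrative monthly cost of an always-on background assistant: cloud API vs. NPU.
# Both the daily token volume and the per-million-token price are hypothetical.

def monthly_cloud_cost(tokens_per_day: int, usd_per_million_tokens: float,
                       days: int = 30) -> float:
    """API bills scale linearly with tokens processed."""
    return tokens_per_day * days * usd_per_million_tokens / 1_000_000

# A background monitor chewing through ~2M tokens/day at a hypothetical $10 per 1M tokens:
cloud = monthly_cloud_cost(2_000_000, 10.0)  # recurring API fees
on_device = 0.0                              # an NPU has no per-token fee once you own the laptop

print(f"cloud: ${cloud:,.2f}/month vs on-device: ${on_device:,.2f}/month")
```

Whatever the exact numbers, the shape of the comparison is the same: cloud inference is a recurring, usage-scaled bill, while on-device inference is a fixed hardware cost paid once, which is the incentive driving developers toward the NPU.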
A look at how AI shaped various aspects of technology and society in 2024, including AI companions, privacy concerns, product saturation, and its impact on creative industries.
In 2024, AI companions gained significant popularity, particularly among younger users aged 18-25. Platforms like EVA AI offered AI-powered chat partners, including fictional personalities and "AI twins" of real creators [1]. These companions provided emotional support and a safe space for users to express themselves, potentially addressing the growing concern of loneliness in an increasingly digital world.
However, the trend raised questions about the impact on real human relationships. Neurologist Steven Novella likened AI companions to "cheesecake," optimized for appeal rather than genuine human connection [1]. While they might serve as a supplement to real relationships, concerns persisted about their potential to spoil users for authentic human interactions.
As major tech companies like OpenAI, Google, and Meta competed for AI dominance, data privacy became a significant casualty. A New York Times report revealed that these companies had been using copyrighted works to train their AI models, regardless of potential legal consequences [2].
Google's controversial privacy policy update in July 2023 expanded its ability to use publicly available information for AI training. OpenAI was found to be mining data from YouTube videos using a tool called "Whisper," violating copyright laws [2]. These practices highlighted the growing tension between AI development and user privacy.
2024 saw an explosion of AI-infused products across various sectors. From AI-powered pet collars to dedicated AI devices like the Humane AI pin and the Rabbit R1, companies rushed to capitalize on the AI trend [3]. However, this led to a phenomenon dubbed "AI fatigue" among consumers.
Many of these products seemed to solve non-existent problems or replicate functions already available through smartphone apps. The market became saturated with AI gimmicks, leaving consumers overwhelmed and questioning the necessity of these often expensive devices [3].
The music industry grappled with the implications of AI in 2024. The Beatles' "Now and Then," an AI-assisted song using John Lennon's recovered vocals, made Grammy history as the first AI-assisted track to receive a nomination [5]. This sparked debates about AI's role in music creation and preservation.
Simultaneously, AI-generated music tools like Suno gained popularity, allowing users to create music from text prompts. This led to concerns from record labels about copyright infringement and the potential threat to human musicians [5].
As AI capabilities advanced, questions arose about its potential to replace human workers. The announcement of Devin, an "AI software engineer" by Cognition Labs, initially caused concern in the programming community [4]. However, subsequent analysis revealed limitations in AI's ability to fully replace human expertise.
The controversy surrounding Devin highlighted the importance of critical evaluation of AI claims and the need for responsible development and deployment of AI technologies [4].
In conclusion, 2024 was a year of both progress and challenges in AI. While AI showed potential in addressing issues like loneliness and preserving artistic legacies, it also raised significant concerns about privacy, job security, and the authenticity of human experiences. As we move forward, finding a balance between AI innovation and ethical considerations will be crucial for the technology's sustainable integration into society.