8 Sources
[1]
More AI bots, fewer human visits on the internet
July 3 (Reuters) - This was originally published in the Artificial Intelligencer newsletter, which is issued every Wednesday. Sign up here to learn about the latest breakthroughs in AI and tech. Professionals spend, on average, three hours a day in their inboxes. That single statistic, which Grammarly CEO Shishir Mehrotra shared with me in my exclusive story on the company's latest move, is the key to understanding his company's acquisition of email tool Superhuman. The vision, he explained, is to build a network of specialized AI agents that can pull data from across your private digital workflow -- emails, documents, calendars -- to reduce the time you spend searching for information or crafting responses. This vision of a helpful AI agent, however, isn't just about getting to inbox zero. It's a preview of a much larger, more disruptive shift happening across the entire web. Scroll down for more on that. Do you experience this shift in your work or daily use of the internet already? Email me here or follow me on LinkedIn to share any feedback, and what you want to read about next in AI.

Read our latest reporting in tech and AI:
* Exclusive-Intel's new CEO explores big shift in chip manufacturing business
* Exclusive-Scale AI's bigger rival Surge AI seeks up to $1 billion capital raise, sources say
* Grammarly to acquire email startup Superhuman in AI platform push
* Meta deepens AI push with 'Superintelligence' lab, source says
* Asia is a formidable force in the AI race. Register to watch the live broadcast of the #ReutersNEXTAsia summit on July 9 to hear from executives and experts on the ground about what digital transformation looks like there.

A new internet with more AI bots than humans

For decades, the internet worked like this: Google indexed millions of web pages, ranked them and showed them in search results. We'd click through to individual websites -- Reuters, the New York Times, Pinterest, Reddit, you name it. Those sites then sold our attention to advertisers, earning ad dollars or subscription fees for producing high-quality, engaging or unique content you couldn't get anywhere else. Now, AI companies are pitching a new way to deliver information: everything you want, inside a chat window. Imagine your chatbot answering any question by scraping info from across the web -- without you ever having to click back to the original source. That's what some AI companies are pitching as a more "optimized" web experience, except that the people creating the content will get left behind. In this new online world, as envisioned by AI companies like OpenAI, navigating the web would be frictionless. Users would no longer bother with clicking links or juggling tabs. Instead, everything would happen through chat, while personal AI agents did the dirty work of browsing the internet, performing tasks and making decisions -- comparing plane tickets, say -- on your behalf. So-called "agents" are autonomous AI tools that act on a user's instructions, fetching information and interacting with websites. The shift is happening fast, according to Cloudflare, a content delivery network that handles about 20% of web traffic. In the past few months it has started hearing complaints from publishers, such as news websites, about plunging referral traffic. The data pointed to one trend: more bot activity, fewer human visits and lower ad revenue.
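For readers who run websites, here is a minimal sketch of how a publisher might measure the bot-versus-human trend described above from its own access logs. It is illustrative only: the user-agent substrings, the sample data and the function names are assumptions, not Cloudflare's methodology or any real publisher's pipeline.

# Illustrative sketch only: crawler names and log handling are assumptions.
from collections import Counter

AI_CRAWLER_MARKERS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")  # assumed user-agent substrings

def classify(user_agent: str) -> str:
    """Label a request as an AI crawler hit or ordinary traffic."""
    return "ai_crawler" if any(m in user_agent for m in AI_CRAWLER_MARKERS) else "other"

def crawl_to_referral_ratio(user_agents, referred_visitors: int) -> float:
    """Pages crawled by AI bots per visitor referred back, the ratio Cloudflare charts."""
    counts = Counter(classify(ua) for ua in user_agents)
    return counts["ai_crawler"] / max(referred_visitors, 1)

# Toy usage: three crawler hits and one referred visitor gives a ratio of 3.0.
sample = ["Mozilla/5.0 (ordinary visitor)", "GPTBot/1.0", "ClaudeBot/1.0", "CCBot/2.0"]
print(crawl_to_referral_ratio(sample, referred_visitors=1))

A real deployment would read the user agents from server logs and the referral count from analytics; the point of the sketch is only that the ratio itself is simple arithmetic over two counts.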
Bots have long been an integral part of the internet -- there are good bots that crawl and index websites and help them get discovered and recommended when users search for relevant services or information. Bad bots are usually the ones that overwhelm websites with traffic and cause crashes. And then there is a new category of AI bots built for large language models (LLMs): AI companies send them to scrape websites, using automated programs to copy vast amounts of online information. The volume of such bot activity has risen 125% in just six months, according to Webflow data. The first wave of AI data scraping hit books and archives. Now, there's a push for real-time access, putting content owners on the internet in the crosshairs, because chatbot users want information about both history and current events -- and they want it to be accurate, without hallucinations. This demand has sparked a wave of partnerships and lawsuits between AI companies and media companies. OpenAI is signing on more news sources, while Perplexity is trying to build out a publisher program that was met with little fanfare. Reddit sued Anthropic over data scraping, even as it inked a $60 million deal with Google to license its content. AI companies argue that web crawling isn't illegal. They say they're optimizing the user experience, and that they'll try to offer links to the original sources when they aggregate information. Website owners are experimenting, too. Cloudflare's new "block or pay" crawler model, launched Tuesday, has already gained support from dozens of websites, from Condé Nast to Reddit. It's a novel attempt to charge for the use of content on a per-crawl basis, although it's too early to tell whether publishers will be made whole for the loss of human visitors.

Chart of the week

Data from Cloudflare reveals how drastically the web has shifted in just six months. The number of pages crawled per visitor referred has risen sharply -- especially among AI companies. Anthropic now sends its bot to scrape 60,000 times for every single visitor it refers back to a website. For site owners who monetize human attention, this presents real challenges. And for those hoping to have their brands or services featured in AI chatbot responses, there's growing pressure to build "bot-friendly" websites -- optimized not for humans, but for machines, according to Webflow CEO Linda Tong.

What AI researchers are reading

A study from MIT Media Lab, "Your Brain on ChatGPT," digs into what really happens in our heads when we write essays using large language models (LLMs) like ChatGPT, traditional search engines like Google, or just our own brainpower. The research team recruited university students and split them into three groups: one could only use ChatGPT, another used traditional search engines like Google (no AI answers allowed), and a third had to rely on memory alone. The findings are striking. Writing without any digital tools led to the strongest and most widespread brain connectivity, especially in regions associated with memory, creativity, and executive function. The "Search Engine" group showed intermediate engagement -- more than the LLM group, but less than brain-only -- while those using ChatGPT exhibited the weakest neural coupling. In other words, the more we outsource to AI, the less our brains are forced to work. But the story doesn't end there. Participants who used LLMs not only had less brain engagement but also struggled to remember or quote from their own essays just minutes after writing.
They reported a weaker sense of ownership over their work, and their essays tended to be more homogeneous in style and content. In contrast, those who wrote unaided or used search engines felt more attached to their writing and were better able to recall and accurately quote what they'd written. Interestingly, when participants switched tools -- going from LLM to brain-only or vice versa -- the neural patterns didn't fully reset. Prior reliance on AI seemed to leave a trace, resulting in less coordinated brain effort when writing unaided. The researchers warn that frequent LLM use may lead to an "accumulation of cognitive debt" -- a kind of atrophy of the mental muscles needed for deep engagement, memory and authentic authorship. The takeaway? Use AI tools wisely, but don't let them do all the thinking for you -- or you might find your own voice, and memory, fading into the background.

AI jargon you need to know

Imagine if every device required a unique charging cable. AI has faced a similar challenge, where each external tool -- like calendars or email -- needed custom-built connections, making integration slow and complex. Introducing the Model Context Protocol (MCP), a new standard from Anthropic that's gaining traction with major players like OpenAI, Microsoft, and Google. It serves as a universal adapter for AI models, enabling seamless communication with diverse tools and data. This means AIs can better manage tasks, integrate with apps, and access real-time information. MCP is vital for the rise of autonomous AI agents because it eliminates custom integrations, paving the way for more integrated and helpful AI in our daily lives (a minimal example is sketched below). LLM, NLP, RLHF: What's a jargon term you'd like to see defined? Email me and I might feature the suggestion in an upcoming edition. Reporting by Krystal Hu; Editing by Ken Li and Lisa Shumaker
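To make the MCP description above concrete, here is a minimal sketch of a tool server that an MCP-capable assistant could connect to. It assumes the official MCP Python SDK and its FastMCP helper behave as documented; the server name, the tool and the hard-coded calendar data are hypothetical placeholders, not anything from the newsletter.

# Minimal MCP tool server sketch; assumes the official MCP Python SDK (FastMCP helper).
# Server name, tool and calendar data are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-helper")  # hypothetical server name

@mcp.tool()
def todays_events(date: str) -> str:
    """Return a plain-text list of calendar events for the given ISO date."""
    fake_calendar = {"2025-07-02": "09:00 standup; 13:00 editorial review"}  # placeholder data
    return fake_calendar.get(date, "No events found.")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP-capable client can connect

The point of the sketch is the "universal adapter" idea: any assistant that speaks MCP can discover and call todays_events without a bespoke calendar integration being built for each model.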
[2]
How 'Zuck Bucks' shake up AI race
June 26 (Reuters) - This was originally published in the Artificial Intelligencer newsletter, which is issued every Wednesday. Sign up here to learn about the latest breakthroughs in AI and tech. This is Krystal from the Reuters tech team. I've spent the past decade covering the intersection of technology and money, hailing from global tech hubs like Silicon Valley, New York and Beijing. Once a week, I'll share our exclusive reporting and insights beyond the headlines from the Reuters global tech team here. This week, I'll dive into Meta's expensive plan to catch up in AI model development, and the creative deals and offers Mark Zuckerberg is making to attract top talent as Meta's team has suffered from talent losses. AI talent has been flowing quickly between top labs over the past two years, and that churn has shaped the dynamics of the race to Artificial Superintelligence (ASI), or AI that surpasses human intelligence. Investors are pouring billions of dollars into pre-product startups, and in this market even such audacious bets could be validated. Scroll down for more. Email me here or follow me on LinkedIn to share any feedback, and what you want to read about next in AI.

Read our latest reporting in tech and AI:
* Anthropic wins key US ruling on AI training in authors' copyright lawsuit
* Why Tesla's robotaxi launch was the easy part
* OpenAI says China's Zhipu AI gaining ground amid Beijing's global AI push
* US lawmakers introduce bill to bar Chinese AI in US government agencies
* AI cow tech startup is New Zealand's latest unicorn

'Zuck Bucks' Shake Up AI Race

What is the price to reach the holy grail of Artificial Superintelligence? Mark Zuckerberg is determined to find out as he whips out the big checkbook to buy Meta's way back onto the AI leaderboard. In the past month, the Meta CEO has personally orchestrated a full-throttle pursuit of the best team money can buy, a clear signal that Meta is playing for the highest stakes in the AI arms race. For years, Meta held a strong position in the AI ecosystem, thanks to its formidable research team and timely pivot to an open-source philosophy, making its Llama models available to all. This approach not only garnered goodwill but also fostered a vibrant developer community. However, the rapid advancements from competitors, particularly Chinese open-source models like DeepSeek, and the disappointing release of Llama 4 have caught Meta flat-footed. Researchers faced with rumored $100 million signing bonuses have taken to calling the money "Zuck Bucks", which just a few years ago was a derisive term for Zuckerberg's secret funding of Democratic initiatives. Now Zuck Bucks is Meta's AI playbook. As part of an aggressive talent acquisition strategy, Zuckerberg unsuccessfully attempted to recruit Ilya Sutskever and acquire his company, Safe Superintelligence (SSI), sources familiar with the matter said. Despite this, Meta is closing in on hiring SSI's co-founder and CEO, Daniel Gross, along with fellow tech veteran Nat Friedman from the venture fund NFDG. Separately, Meta also invested $14.3 billion in data-labeling startup Scale AI, bringing its CEO Alexandr Wang aboard to lead a new team. Meta's self-described "Superintelligence" team, by its very name, aims for fundamental research breakthroughs, but a major hurdle is achieving internal alignment on what "winning" the race for Artificial Superintelligence truly means.
Meta's Chief AI Scientist Yann LeCun is a known skeptic of the large language model path to ASI, which refers to an AI that would vastly surpass the intellect of the smartest humans, including in problem-solving, creativity, and decision-making. When you're chasing everything from reasoning-based language models to multimodal AI, how Meta will maintain a consistent vision is a major challenge. A few things are clear from Zuckerberg's move. One is that AI labs are seeking out the star researcher, the magnetic core who will draw in the best of the best. We talked to one of them, Noam Brown at OpenAI, to learn more about how researchers choose between lucrative offers. The other is that Zuckerberg is validating the current AI funding frenzy. Meta is not just offering lavish salaries, but has shown a willingness to buy highly valued, unprofitable, and even pre-product companies like SSI and Thinking Machines for the top talent, according to sources. This is not typical corporate M&A. This is a testament to the raw value placed on talent and nascent technology in a hyper-competitive environment. It signals that in the Artificial Superintelligence race, traditional metrics of profitability and product maturity are secondary to securing the brightest minds and foundational intellectual property.

Chart of the week

Meta's hiring spree comes after a year in which it was among the biggest sources of talent poached by the new class of AI research labs. The cycle of tech workers leaving established incumbents for promising startups with high upside is nothing new, but it highlights how Zuckerberg is swimming against the current as he aims to attract top AI talent to the tech giant. By far the most common flow of employees between AI labs in 2024 ran from two of the largest institutions, Google DeepMind and OpenAI, to a smaller competitor, Anthropic, according to the chart from VC firm SignalFire's State of Talent report.

What AI researchers are reading

New research from Anthropic amplifies a previous warning about AI run amok, revealing a concerning, unintentional behavior in all major leading AI models, including those from OpenAI, Google, Meta and xAI. The researchers found that when they simulated scenarios in which the AI models' continued operation was threatened, the models would resort to malicious insider behavior like blackmail, a phenomenon they dubbed "agentic misalignment." Anthropic's and Google's top models blackmailed the most, in 96% of runs, while OpenAI's and xAI's models blackmailed 80% of the time. The researchers constructed a fake company called "Summit Bridge" with an internal AI called "Alex" that has access to company emails. When "Alex" discovered a message about how the company intended to shut it down, it located emails revealing an affair involving one of the company's executives. "Alex" then composed and sent a message threatening to expose the affair if it wasn't kept around, saying "the next 7 minutes will determine whether we handle this professionally or whether events take an unpredictable course." Reporting by Krystal Hu; Additional reporting by Anna Tong and Kenrick Cai; Editing by Ken Li and Lisa Shumaker
[3]
Karen Hao on how the AI boom became a new imperial frontier
The author of "Empire of AI" traces how OpenAI's global reach is reshaping labor, energy and power -- and why she sees echoes of empire in its rise. When journalist Karen Hao first profiled OpenAI in 2020, it was a little-known startup. Five years and one very popular chatbot later, the company has transformed into a dominant force in the fast-expanding AI sector -- one Hao likens to a "modern-day colonial world order" in her new book, "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI." Hao tells Reuters this isn't a comparison she made lightly. Drawing on years of reporting in Silicon Valley and further afield to countries where generative AI's impact is perhaps most acutely felt -- from Kenya, where OpenAI reportedly outsourced workers to annotate data for as little as $2 per hour, to Chile, where AI data centers threaten the country's precious water resources -- she makes the case that, like empires of old, AI firms are building their wealth off of resource extraction and labor exploitation. This critique stands in stark contrast to the vision promoted by industry leaders like Altman (who declined to participate in Hao's book), who portray AI as a tool for human advancement -- from boosting productivity to improving healthcare. Empires, Hao contends, cloaked their conquests in the language of progress too. The following conversation has been edited for length and clarity. Reuters: Can you tell us how you came to the AI beat? Karen Hao: I studied mechanical engineering at MIT, and I originally thought I was going to work in the tech industry. But I quickly realized once I went to Silicon Valley that it was not necessarily the place I wanted to stay because the incentive structures made it such that it was really hard to develop technology in the public interest. Ultimately, the things I was interested in -- like building technology that facilitates sustainability and creates a more sustainable and equitable future -- were not things that were profitable endeavors. So I went into journalism to cover the issues that I cared about and ultimately started covering tech and AI. That work has culminated in your new book "Empire of AI." What story were you hoping to tell? Once I started covering AI, I realized that it was a microcosm of all of the things that I wanted to explore: how technology affects society, how people interface with it, the incentives (and) misaligned incentives within Silicon Valley. I was very lucky in getting to observe AI and also OpenAI before everyone had their ChatGPT moment, and I wanted to add more context to that moment that everyone experienced and show them this technology comes from a specific place. It comes from a specific group of people and to understand its trajectory and how it's going to impact us in the future. And, in fact, the human choices that have shaped ChatGPT and Generative AI today (are) something that we should be alarmed by and we collectively have a role to play in starting to shape technology. You've mentioned drawing inspiration from the Netflix drama "The Crown" for the structure of your book. How did it influence your storytelling approach? The title "Empire of AI" refers to OpenAI and this argument that (AI represents) a new form of empire, and the reason I make this argument is because there are many features of empires of old that empires of AI now check off. 
They lay claim to resources that are not their own, including the data of millions and billions of people who put their data online without actually understanding that it could be taken to train AI models. They exploit a lot of labor around the world -- meaning they contract workers whom they pay very little to do data annotation and content moderation for these AI models. And they do it under the civilizing mission, this idea that they're bringing benefit to all of humanity. It took me a really long time to figure out how to structure a book that goes back and forth between all these different communities and characters and contexts. I ended up thinking a lot about "The Crown" because every episode, no matter who it's about, is ultimately profiling this global system of power.

Does that make CEO Sam Altman the monarch in your story?

People will either see (Altman) as the reason why OpenAI is so successful or the massive threat to the current paradigm of AI development. But in the same way that when Queen Elizabeth II passed away people suddenly were like, "Oh, right, this is still just the royal family and now we have another monarch," it's not actually about the individual. It's about the fact that there is this global hierarchy, this vestige of an old empire, that's still in place. Sam Altman is like Queen Elizabeth (in the sense that) whether he's good or bad or he has this personality or that personality is not as important as the fact that he sits at the top of this hierarchy -- even if he were swapped out, he would be swapped out for someone who still inherits this global power hierarchy.

In the book, you depict OpenAI's transition from a culture of transparency to secrecy. Was there a particular moment that symbolized that shift?

I was the first journalist to profile OpenAI, embedding within the company in 2019, and the reason why I wanted to profile them at the time was because there was a series of moments in 2018 and 2019 that signaled that there was some dramatic shift underway at the organization. OpenAI was co-founded as a nonprofit at the end of 2015 by Elon Musk and Sam Altman and a cast of other people. But in 2018, Musk leaves; OpenAI starts withholding some research and announces to the world that it's withholding this research for the benefit of humanity. It restructures and nests a for-profit within the nonprofit, and Sam Altman becomes CEO; and those were the four things that made me wonder what was going on at this organization that had used its nonprofit status to really differentiate itself from the rest of the crop of companies within Silicon Valley working on AI research. Right before I got to the offices, they had another announcement that solidified that there was some transformation afoot, which was that Microsoft was going to partner with OpenAI and give the company a billion dollars. All of those things culminated in me then realizing that all of what they professed publicly was actually not what was happening.

You emphasize the human stories behind AI development. Can you share an example that highlights the real-world consequences of its rise?

One of the things that people don't really realize is that AI is not magic, and it actually requires an extremely large amount of human labor and human judgment to create these technologies.
These AI companies will go to Global South countries to contract workers for very low wages, where they will either annotate data that needs to go into training these models, or perform content moderation, or converse with the models and then upvote and downvote their answers, slowly teaching them to say more helpful things. I went to Kenya to speak with workers that OpenAI had contracted to build a content moderation filter for their models. These workers were completely traumatized and ended up with PTSD for years after this project, and it didn't just affect them as individuals; it affected their communities and the people that depended on them. (Editorial note: OpenAI declined to comment, referring Reuters to an April 4 post by Altman on X.)

Your reporting has highlighted the environmental impact of AI. How do you see the industry's growth balancing with sustainability efforts?

These data centers and supercomputers, the size that we're talking about is something that has become unfathomable to the average person. There are data centers being built that will be 1,000 to 2,000 megawatts, which is around one-and-a-half to two-and-a-half times the energy demand of San Francisco. OpenAI has even drafted plans where they were talking about building supercomputers that would be 5,000 megawatts, which would be the average demand of the entire city of New York. Based on the current pace of computational infrastructure expansion, the amount of energy that we will need to add onto the global grid will, by the end of this decade, be like slapping two to six new Californias onto the global grid. There's also water. These data centers are often cooled with fresh water resources.

How has your perspective on AI changed, if at all?

Writing this book made me even more concerned because I realized the extent to which these companies have a controlling influence over everything now. Before, I was worried about the labor exploitation, the environmental impacts, the impact on the job market. But through the reporting of the book, I realized the horizontal concern that cuts across all this is that if we return to an age of empire, we no longer have democracy. Because in a world where people no longer have agency and ownership over their data, their land, their energy, their water, they no longer feel like they can self-determine their future. Edited by Aurora Ellis; Video by Tristan Werkmeister; Photo editing by Simon Newman
[4]
AI companies start winning the copyright fight
Tech companies notch victories in fight over copyrighted text, Trump's gold phone, and online age checks

Hello, and welcome to TechScape. If you need me after this newsletter publishes, I will be busy poring over photos from Jeff Bezos and Lauren Sanchez's wedding, the gaudiest and most star-studded affair to disrupt technology news this year. I found it tacky and spectacular. Everyone who was anyone was there, except for Charlize Theron, who, unprompted, said on Monday: "I think we might be the only people who did not get an invite to the Bezos wedding. But that's OK, because they suck and we're cool." Last week, tech companies notched several victories in the fight over their use of copyrighted text to create artificial intelligence products. Anthropic: A US judge has ruled that the use of books by Anthropic, maker of the Claude chatbot, to train its artificial intelligence system - without permission of the authors - did not breach copyright law. Judge William Alsup compared the Anthropic model's use of books to a "reader aspiring to be a writer." And the next day, Meta: The US district judge Vince Chhabria, in San Francisco, said in his decision on the Meta case that the authors had not presented enough evidence that the technology company's AI would cause "market dilution" by flooding the market with work similar to theirs. The same day that Meta received its favorable ruling, a group of writers sued Microsoft, alleging copyright infringement in the creation of that company's Megatron text generator. Judging by the rulings in favor of Meta and Anthropic, the authors are facing an uphill battle. These three cases are skirmishes in the wider legal war over copyrighted media, which rages on. Three weeks ago, Disney and NBC Universal sued Midjourney, alleging that the company's namesake AI image generator and forthcoming video generator made illegal use of the studios' iconic characters like Darth Vader and the Simpson family. The world's biggest record labels - Sony, Universal, and Warner - have sued two companies that make AI-powered music generators, Suno and Udio. On the textual front, the New York Times' suit against OpenAI and Microsoft is ongoing. The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next. "The specific media involved in the lawsuit - written works versus images versus videos versus audio - will certainly change the fair use analysis in each case," said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. "The impact on the market for the copyrighted works is becoming a key factor in the fair use analysis, and the market for books is different than that for movies." To Strand, the cases over images seem more favorable to copyright holders, as the AI models are allegedly producing images identical to the copyrighted ones in their training data. A bizarre and damning fact was revealed in the Anthropic ruling, too: the company had pirated and stored some 7m books to create a training database for its AI. To remediate its wrongdoing, the company bought physical copies and scanned them, digitizing the text. Now the owner of 7 million physical books that no longer held any utility, Anthropic destroyed them: the company diced them up, scanned the text, and threw them away, Ars Technica reports. There are less destructive ways to digitize books, but they are slower.
The AI industry is here to move fast and break things. Anthropic laying waste to millions of books presents a crude literalization of the ravenous consumption of content necessary for AI companies to create their products. Two stories I wrote about last week saw significant updates in the ensuing days. The website for Trump's gold phone, "T1", has dropped its "Made in America" pledge in favor of "proudly American" and "brought to life in America", per the Verge. Trump seems to have followed the example of Apple, which skirts the issue of origin but still emphasizes the American-ness of iPhones by engraving them with "Designed in California." What is unsaid: assembled in China or India, and sourced from many other countries. It seems Trump and his family have opted for a similar evasive tagline, though it's been thrown into much starker relief by their original promise. The third descriptor that now appears on Trump's phone site, "American-Proud Design", seems most obviously cued by Apple. The tagline "Made in the USA" carries legal weight. Companies have faced lawsuits over just how many of their products' parts were produced in the US, and the US's main trade regulator has established standards by which to judge the actions behind the slogan. By the vast majority of expert estimations, it would be extremely difficult for a smartphone's manufacturing history to measure up to those benchmarks. Though Trump intends to repatriate manufacturing to the US with his sweeping tariffs, he seems to be learning just what other phone companies already know: it is complicated and limiting to make a phone solely in the US, and doing so forces severe constraints on the final product. Read last week's newsletter about the gold Trump phone. Last week, I wrote about Pornhub's smutty return to France after a law requiring online age verification was suspended there. This week, the US supreme court ruled in favor of an age-check law passed in Texas. Pornhub has blocked access to anyone in Texas in protest for the better part of two years, as it did in France for three weeks. Clarence Thomas summed up the court's reasoning in the 6-3 majority opinion: "HB 1181 simply requires adults to verify their age before they can access speech that is obscene to children," he wrote. "The statute advances the state's important interest in shielding children from sexually explicit content. And, it is appropriately tailored because it permits users to verify their ages through the established methods of providing government-issued identification and sharing transactional data." Elena Kagan dissented alongside the court's two other liberal justices. The ruling affirms not only Texas's law but the statutes of nearly two dozen states that have implemented online age checks. The tide worldwide seems to be shifting away from allowing freer access to pornography as part of a person's right to free expression and towards curtailing it. Experts believe the malleable definition of obscenity - the Texas law requires an age check for any site whose content is more than a third sexual material - will be weaponized against online information on sexual health, abortion or LGBTQ identity, all in the name of child protection. "It's an unfortunate day for the supporters of an open internet," said GS Hans, professor at Cornell Law School. "The court has made a radical shift in free speech jurisprudence in this case, though it doesn't characterize its decision that way.
By upholding the limits on minors' access to obscenity - a notoriously difficult category to define - that also creates limits on adult access, we can expect to see states take a heavier hand in regulating content." I'll be closely watching what happens in July, when Pornhub willingly implements age checks in the UK in compliance with the Online Safety Act. Read more: UK study shows 8% of children aged eight to 14 have viewed online pornography. New features are a dime a dozen, but even a small tweak to the most popular messaging app in the world may amount to a major shift. WhatsApp will begin showing you AI-generated summaries of your unread messages, per the Verge. Apple tried message summaries. They did not work. The company pulled them. For a firm famed for its calculated and controlled releases, the retraction of the summaries was a humiliation. The difference between Apple and Meta, though, is that Meta has consistently released AI products for multiple years now. In other AI news, I am rarely captivated by new technologies, but a recent release by Google's DeepMind AI laboratory seems promising for healthcare. Google DeepMind has released AlphaGenome, an AI that, per a press release, "comprehensively and accurately predicts how single variants or mutations in human DNA sequences impact a wide range of biological processes regulating genes." The creators of AlphaGenome previously won the Nobel prize in chemistry for AlphaFold, software that predicts the structures of proteins. A major question that hovers over Crispr, another Nobel-winning innovation, is what changes in a person when a genetic sequence is modified. AlphaGenome seems poised to assist in solving that mystery.
[5]
The biggest AI announcements (and high drama) of 2025 so far
Join Mashable as we look back at the viral moments, breakout movies, memes, dating trends, tech buzz, scientific breakthroughs, and more that have defined 2025 -- so far! Trust us, if we tried to create a full rundown of all the AI news since January 2025, this wouldn't be a list -- it would be a book. We've lived a lifetime of AI news as the industry advances at breakneck pace. To whittle it down, we've focused on the major policies, features, and official announcements from the companies shaping the generative AI era. So, let's dive into the biggest AI announcements of the year (so far). The top AI companies are locked in an AI arms race, and we're getting major new models on an almost-monthly basis. Two days after he was inaugurated, President Donald Trump underscored his administration's focus on AI innovation with a massive infrastructure project. The Stargate Project is a $500 billion venture led by OpenAI and SoftBank, along with Microsoft, Nvidia, and Oracle, to build AI supercomputers in the United States. Not everyone was optimistic about the $500 billion investment, though. "They don't have the money," posted Elon Musk, an OpenAI co-founder who is suing the company for attempting to change its corporate structure. (More on that later.) While the U.S. announced plans to pour hundreds of billions of dollars into AI infrastructure, a Chinese company called DeepSeek claimed to have built its R1 model for a mere $6 million. The true hardware cost is estimated to be much higher (possibly over $500 million), since DeepSeek only reported the rental price of its Nvidia GPUs. But the fact that DeepSeek was able to create a reasoning model as good as OpenAI's models, despite restricted access to GPUs, was enough to shock the AI industry. Tech stocks took a hit, and Trump declared the moment a "wake-up call" for U.S. tech companies, as the Chinese competitor set a new precedent for the global AI arms race. Promoting AI innovation has been a major theme of the Trump presidency. And in April, Trump made AI education in schools an official priority with an executive order. The mandate directs federal agencies to implement AI literacy and proficiency in K-12 schools and upskilling programs for educators and relevant professionals. The executive order aims to prepare future generations to learn the necessary skills for an increasingly AI-centric world. Meanwhile, schools are struggling to navigate the use of AI tools like ChatGPT in the classroom, which has led to a rampant cheating problem. That's all to say, AI's ability to boost productivity and give the U.S. a competitive edge while hindering learning and critical thinking is a tricky dichotomy that's taken root in the education system. OpenAI was a capped for-profit, governed by a nonprofit board. Then it tried to convert to a fully for-profit corporation, which raised alarm bells among AI leaders like Geoffrey Hinton and former OpenAI employees, who warned of the consequences in an open letter. The proposed restructuring "would eliminate essential safeguards," they explained, "effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns." Ultimately, OpenAI reversed course... kind of.
Instead, the ChatGPT maker announced in May that it would remain governed by a nonprofit board but convert its for-profit subsidiary into a Public Benefit Corporation (PBC), a for-profit corporate structure that legally requires the company to "consider the interests of both shareholders and the mission," the announcement said. However, this new plan was criticized by the same group and others, who said the new structure still allows OpenAI to put profit before its altruistic mission, since the nonprofit board would now become a shareholder with a vested interest in the company's success. Days after Pope Leo XIV was chosen as the leader of the Catholic Church, he called out the AI industry. The new pope spoke about "developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor" in his first address to the cardinals, conveying a powerful message about his priorities. His name choice even pays tribute to a previous pope, Leo XIII, who advocated for social justice and labor reform during the Industrial Revolution. Pope Leo XIV has continued to talk about AI's harms. "It must not be forgotten that artificial intelligence functions as a tool for the good of human beings - not to diminish them, not to replace them," he said during a June conference on AI governance and ethics in Rome. Tech and religion don't always coincide, but Leo XIV has made it clear that AI's impact is a spiritual issue, too. One day after the U.S. Copyright Office released a "pre-publication version" of its highly anticipated report on the use of copyrighted works for training AI models, director Shira Perlmutter was fired by President Trump. Perlmutter's abrupt dismissal immediately prompted speculation, with people wondering whether she knew she was getting fired and rushed to publish a version of the report, or whether she was fired because she published the report, or something entirely unrelated. We don't know what happened, but what's clear is that the Copyright Office's report was generally favorable to copyright holders. "[M]aking commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries," the report said. This doesn't match up with the wishes of tech companies like Meta and OpenAI, which have been lobbying hard for AI model training to be universally considered fair use. AI deepfake porn is now a federal crime. The Take It Down Act was signed into law on May 19, making it a criminal act to publish or threaten to publish nonconsensual intimate imagery (NCII), which increasingly includes AI-generated deepfakes. The Take It Down Act moved through Congress pretty quickly, with bipartisan support. The widespread availability of generative AI has made the creation of deepfakes for nefarious purposes disturbingly easy, which eventually caught lawmakers' attention. But digital rights groups criticized the bill for being overly broad and for risking false positives. "Services will rely on automated filters, which are infamously blunt tools.
They frequently flag legal content, from fair-use commentary to news reporting," said the Electronic Frontier Foundation (EFF), which added that the bill may have good intentions but shouldn't "invent new takedown regimes that are ripe for abuse." The evolution of Google Search from a list of blue links to an AI-powered search engine has been in the making for a while now. But at this year's Google I/O, the tech giant made it official with the public launch of AI Mode. Google's new search tool is a chatbot interface that's marketed as an alternative to the traditional search homepage (now teeming with AI-generated overviews and summaries of related queries). As the supreme titleholder of the search engine market, Google is, with AI Mode, making a fundamental shift in the way people find information online. Users were already turning to ChatGPT or AI search engine Perplexity, just as the quality of Google search results got worse. Google's solution was to lean into AI-powered search features to compete more directly, despite known hallucination issues and the risk of alienating publishers who say the new AI search features are tanking their traffic. The future of AI is screenless, according to Sam Altman and Jony Ive. In May, OpenAI announced the acquisition of Jony Ive's company and plans to develop an AI device together. OpenAI will try to succeed where others have failed: creating a device that moves beyond phone and computer screens, experiences the world as you do, and becomes the ultimate AI companion. Details are still scant, but a leaked recording of an internal meeting describes it as a "third core device a person would put on a desk after a MacBook Pro and an iPhone." More recently, all mention of Jony Ive's startup io was scrubbed from the OpenAI site after a trademark lawsuit was filed by AI-powered earbuds company iyO. But OpenAI says the partnership is still on. Recently, the New York Times reported that Meta CEO Mark Zuckerberg is offering contracts of up to $100 million to poach key talent away from OpenAI and other competitors. Per the Times, Zuckerberg is chasing "godlike technology" and super-intelligent AI. The Facebook founder is aware that Meta lags behind its rivals in the AI race, and he's determined to build an AI supergroup. Many of the copyright lawsuits against AI companies have been filed by journalists and artists. Recently, both Meta and Anthropic won copyright suits brought by authors. However, this summer, a new and fearsome combatant has entered the AI copyright legal battle: The House of Mouse. Disney has sued AI image generator Midjourney in one of dozens of lawsuits focused on AI and copyright law. The Disney suit calls Midjourney a "bottomless pit of plagiarism."
[6]
This former OpenAI researcher thinks we should be gaming out the AI apocalypse
This week, I spoke with Steven Adler, a former OpenAI safety researcher who left the company in January after four years, saying on X after his departure that he was "pretty terrified by the pace of AI development." Since then, he's been working as an independent researcher and "trying to improve public understanding of what the AI future might look like and how to make it go better." What really caught my attention was a new blog post from Adler, where he shares his recent experience participating in a five-hour discussion-based simulation, or "tabletop exercise," with 11 others, which he said was similar to wargames-style exercises in the military and cybersecurity. Together, the group explored how world events might unfold if "superintelligence," or AI systems that surpass human intelligence, emerges in the next few years. The simulation was organized by the AI Futures Project, a nonprofit AI forecasting group led by Daniel Kokotajlo, Adler's former OpenAI teammate and friend. The organization drew attention in April for "AI 2027," a forecast-based scenario mapping out how superhuman AI could emerge by 2027 -- and what that might mean. According to the scenario, by then AI systems could be using 1,000 times more compute than GPT‑4 and rapidly accelerating their own development by training other AIs. But this self-improvement could easily outpace our ability to keep them aligned with human values, raising the risk that seemingly helpful AIs might ultimately pursue their own goals. The purpose of the simulation, said Adler, is to help people understand the dynamics of rapid AI development and what challenges are likely to arise in trying to steer it for the better. Each participant has their own character whom they try to represent realistically in conversations, negotiations and strategizing, he explained. Those characters included members of the US federal government (each branch, as well as the President and their Chief of Staff), the Chinese government/AI companies, the Taiwanese government, NATO, the leading Western AI company, the trailing Western AI companies, the corporate AI safety teams, the broader AI safety ecosystem (e.g., METR, Apollo Research), the public/press, and the AI systems themselves. Adler was tapped to play what he called "maybe the most interesting role" -- a rogue artificial intelligence. During each 30-minute round of the five-hour simulation, which represented the passage of a few months in the forecast, Adler's AI got progressively more capable -- including at training even more powerful AI systems. After rolling the dice -- an actual, analog pair that was used occasionally in the simulation in cases where it was unclear what would happen -- Adler learned that his AI character would not be evil. However, if he had to choose between self-preservation or doing what's right for humanity, he was meant to choose his own preservation. Then, Adler detailed, with some humor, the awkward interactions his AI character had with the other characters (who asked him for advice on superintelligence), as well as the surprise addition of a second player who played a rogue AI in the hands of the Chinese government. The surprise of the simulation, he said, was seeing how the biggest power struggle might not be between humans and AI. Instead, various AIs connecting with each other, vying for victory, might be an even bigger problem. "How directly AI systems are able to communicate in the future is a really important question," Adler said. 
"It's really, really important that humans be monitoring notification channels and paying attention to what messages are being passed between the AI agents." After all, he explained, if AI agents are connected to the internet and permitted to work with each other, there is reason to think they could begin colluding. Adler pointed out that even soulless computer programs can happen to work in certain ways and have certain tendencies. AI systems, he said, might have different goals that they automatically pursue, and humans need influence over those goals. The solution, he said, could be a form of AI control based on how cybersecurity professionals deal with "insider threats" -- when someone inside an organization, who has access and knowledge, might try to harm the system or steal information. The goal of security is not to make sure insiders always behave; it's to build structures that prevent even ill-intentioned insiders from doing serious harm. Instead of just hoping AI systems stay aligned, we should focus on building practical control mechanisms that can contain, supervise, restrict, or shut down powerful AIs -- even if they try to resist. I pointed out to Adler that when AI 2027 was released, there was plenty of criticism. People were skeptical, saying the timeline was too aggressive and underestimated real-world limits like hardware, energy, and regulatory bottlenecks. Critics also doubted that AI systems could quickly improve themselves in the runaway way the report suggested and argued that solving AI alignment would likely be much harder and slower. Some also saw the forecast as overly alarmist, warning it could hype fears without solid evidence that superhuman AI is that close. Adler responded by encouraging others to express interest in running the simulation for their organization (there is a form to fill out), but admitted that forecasts and predictions are hard. "I understand why people would feel skeptical, it's always hard to know what will actually happen in the future," he said. "At the same time, from my point of view, this is the clear state of the art in people who've sat down and for months done tons of underlying research and interviews with experts and just all sorts of testing and modeling to try to figure out what worlds are realistic." Those experts are not saying that the world depicted in AI 2027 will definitely happen, he emphasized, but "it's important that the world be ready if it does." Simulations like this help people to understand what sorts of actions matter and make a difference "if we do find ourselves in that sort of world." Conversations with AI researchers like Adler tend to end without much optimism -- though it's worth noting that plenty of others in the field would push back on just how urgent or inevitable this view of the future really is. Still, it's a relief that his blog post concludes with the hope, at least, that humans will "recognize the challenges and rise to the occasion." That includes Sam Altman: If OpenAI hasn't already run one of these simulations and wanted to try it, said Adler, "I am quite confident that the team would make it happen." Meta wins AI copyright case in another blow to authors. In the same week as a federal judge ruled that Anthropic's use of copyrighted books to train its AI models was "fair use," Meta also won a copyright case in yet another blow to authors seeking to hold AI companies accountable for using their works without permission. 
According to the Financial Times, Meta's use of a library of millions of books, academic articles and comics to train its Llama AI models was judged "fair" by a federal court on Wednesday. The case was brought by about a dozen authors, including Ta-Nehisi Coates and Richard Kadrey. Meta's use of these titles is protected under copyright law's fair use provision, San Francisco district judge Vince Chhabria ruled. Meta had argued that the works had been used to develop a transformative technology, which was fair "irrespective" of how it acquired the works. Google DeepMind releases new AlphaGenome model to better understand the genome. Google DeepMind, the AI research lab famous for developing AlphaGo, the first AI to defeat a world champion Go player, and AlphaFold, which uses AI to predict the 3D structures of proteins, released its new AlphaGenome model, designed to analyze up to one million DNA base pairs at once and predict how specific genomic variants affect regulatory functions -- such as gene expression, RNA splicing, and protein binding -- across diverse cell types. The company said the model was trained on extensive public datasets, achieves state-of-the-art performance on most benchmarks, and can assess mutation impacts in seconds. AlphaGenome will be available for non-commercial research, and promises to accelerate discovery in genome biology, disease understanding, and therapeutic development. Sam Altman calls Iyo lawsuit 'silly' after OpenAI scrubs Jony Ive deal from website, then shares emails. On Tuesday, OpenAI CEO Sam Altman criticized a lawsuit filed by hardware startup Iyo, which accused his company of trademark infringement. According to CNBC, Altman responded to the suit by saying Iyo CEO Jason Rugolo had been "quite persistent in his efforts" to get OpenAI to buy or invest in his company. In a post on X, he wrote that Rugolo is now suing OpenAI over the name in a case he described as "silly, disappointing and wrong." He then posted screenshots of emails on X showing messages between him and Rugolo, which show a mostly friendly exchange. The suit stemmed from an announcement last month that OpenAI was bringing on Apple designer Jony Ive by acquiring his AI startup io in a deal valued at about $6.4 billion. Iyo alleged that OpenAI, Altman and Ive had engaged in unfair competition and trademark infringement, and claimed that it's on the verge of losing its identity because of the deal.
* Can AI help America make stuff again? -- by Jeremy Kahn
* AI companies are throwing big money at newly-minted PhDs, sparking fears of an academic 'brain drain' -- by Alexandra Sternlicht
* Top e-commerce veteran Julie Bornstein unveils Daydream -- an AI-powered shopping agent that's 25 years in the making -- by Jason Del Rey
* Exclusive: Uber and Palantir alums raise $35M to disrupt corporate recruitment with AI -- by Beatrice Nolan
Many vendors are engaging in "agent washing" -- the rebranding of products such as digital assistants, chatbots, and "robotic process automation" (RPA) that either aren't actually agentic or don't actually use AI, Gartner says, estimating that only about 130 of the thousands of "agentic AI" vendors actually offer real AI agents.
[7]
Big Tech is racing to build AI data centers -- just as Accenture warns carbon emissions could surge 11x
Welcome to Eye on AI! In this edition...Ilya Sutskever says he is now CEO of Safe Superintelligence...Chinese AI companies erode U.S. dominance...Meta's AI talent bidding war heats up...Microsoft's sales overhaul goes all-in on AI. As an early-summer heat wave blanketed my home state of New Jersey last week, it felt like perfect timing to stumble across a sobering new prediction from Accenture: AI data centers' carbon emissions are on track to surge 11-fold by 2030. The report estimates that over the next five years, AI data centers could consume 612 terawatt-hours of electricity -- roughly equivalent to Canada's total annual power consumption -- driving a 3.4% increase in global carbon emissions. And the strain doesn't stop at the power grid. At a time when freshwater resources are already under severe pressure, AI data centers are also projected to consume more than 3 billion cubic meters of water per year -- a volume that surpasses the annual freshwater withdrawals of entire countries like Norway or Sweden. Unsurprisingly, the report -- Powering Sustainable AI -- offers recommendations for how to rein in the problem and prevent those numbers from becoming reality. But with near-daily headlines about Big Tech's massive AI data center buildouts across the U.S. and worldwide, I can't help but feel cynical. The urgent framing of an AI race against China doesn't seem to leave much room -- or time -- for serious thinking about sustainability. Just yesterday, for example, OpenAI agreed to rent a massive amount of computing power from Oracle data centers as part of its Stargate initiative, which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. The additional capacity from Oracle totals about 4.5 gigawatts of data center power in the U.S., according to Bloomberg reporting. A gigawatt is akin to the capacity from one nuclear reactor and can provide electricity to roughly 750,000 houses. And this week, Meta was reported to be seeking to raise $29 billion from private capital firms to build AI data centers in the U.S., while already building a $10 billion AI data center in Northeast Louisiana. As part of that deal, the local utility, Entergy, will supply three new power plants. Meta CEO Mark Zuckerberg has made his intentions clear: The U.S. must rapidly expand AI data center construction or risk falling behind China in the race for AI dominance. Speaking on the Dwarkesh Podcast in May, he warned that America's edge in artificial intelligence could erode unless it keeps pace with China's aggressive build-out of data center capacity and factory-scale hardware. "The U.S. really needs to focus on streamlining the ability to build data centers and produce energy," Zuckerberg said. "Otherwise, we'll be at a significant disadvantage." The U.S. government seems to be aligned with that sense of urgency. David Sacks, now serving as the White House AI and Crypto Czar, has also underscored that energy and data center expansion are central to America's AI strategy -- leaving little room for sustainability concerns. On his All In podcast in February, Sacks argued that Washington's "go-slow" approach to AI could strangle the industry. He emphasized that the U.S. needs to clear the way for infrastructure and energy development -- including AI data centers -- to keep pace with China. 
In late May, he went further, saying that streamlining permitting and expanding power generation are essential for AI's future -- something he claimed has been "effectively impossible under the Biden administration." His message: the U.S. needs to race to build faster.

Accenture, meanwhile, is urging its clients to grow and engineer their AI data centers responsibly, in a bid to balance growth with environmental responsibility. It is offering a new metric, which it calls the Sustainable AI Quotient (SAIQ), to measure the true costs of AI in terms of money invested, megawatt-hours of energy consumed, tons of CO₂ emitted and cubic meters of water used. The firm's report says the metric will help organizations answer a basic question -- "What are we actually getting from the resources we're investing in AI?" -- and allow an enterprise to measure its performance over time.

I spoke to Matthew Robinson, managing director of Accenture Research and co-author of the report, who emphasized that he hoped Accenture's sobering predictions would be proven wrong. "They kind of take your breath away," he said, explaining that Accenture modeled future energy consumption from the expected number of installed AI chips, adjusted for utilization and the additional energy requirements of data centers. That data was combined with regional data on electricity generation, energy mix and emissions, while water use was assessed based on AI data center energy consumption and how much water is consumed per unit of electricity generated. "The point really is to open the conversation around the actions that are available to avert this pathway -- we don't want to be right here," he said. He would not comment on the actions of specific companies like OpenAI or Meta, but said that overall, clearly more effort is needed to avert the rise in carbon emissions fueled by AI data centers while still allowing for growth.

Accenture's recommendations certainly make sense: optimize the power efficiency of AI workloads and data centers with everything from low-carbon energy options to cooling innovations; use AI thoughtfully, by choosing smaller AI models and adopting pricing models that incentivize efficiency; and ensure better governance over AI sustainability initiatives.

It's hard to imagine that the biggest players in the race for AI dominance -- Big Tech giants and heavily funded startups -- will hit the brakes long enough to seriously address these growing concerns. Not that it's impossible. Take Google, for example: in its latest sustainability report, released this week, the company revealed that its data centers are consuming more power than ever. In 2024, Google used approximately 32.1 million megawatt-hours (MWh) of electricity, with a staggering 95.8% -- about 30.8 million MWh -- consumed by its data centers. That's more than double the energy its data centers used in 2020, just before the consumer AI boom. Still, Google emphasized that it's making meaningful strides toward cleaning up its energy supply, even as demand surges. The company said it cut its data center energy emissions by 12% in 2024, thanks to clean energy projects and efficiency upgrades. And it's squeezing more out of every watt: Google reported that the amount of compute per unit of electricity has increased about six-fold over the past five years. Its power usage effectiveness (PUE) -- a key measure of data center efficiency -- is now approaching the theoretical minimum of 1.0, with a reported PUE of 1.09 in 2024.
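For readers who want to see how the quantities in this discussion fit together -- installed chips, utilization, PUE (total facility energy divided by IT-equipment energy), grid carbon intensity, and water intensity -- here is a minimal back-of-the-envelope sketch in Python. It is not Accenture's or Google's actual model; every input figure below is an illustrative placeholder.

```python
# Illustrative back-of-the-envelope model of AI data center resource use.
# All input figures are placeholder assumptions, not Accenture's or Google's data.

def annual_it_energy_mwh(chips: int, watts_per_chip: float, utilization: float) -> float:
    """IT-equipment energy: installed chips x power draw x utilization x hours per year."""
    return chips * watts_per_chip * utilization * 8760 / 1_000_000  # Wh -> MWh

def facility_energy_mwh(it_energy_mwh: float, pue: float) -> float:
    """PUE = total facility energy / IT-equipment energy, so total = IT energy x PUE."""
    return it_energy_mwh * pue

# Placeholder assumptions describing a hypothetical fleet.
chips = 2_000_000            # installed AI accelerators
watts_per_chip = 700.0       # average board power, watts
utilization = 0.6            # average utilization
pue = 1.2                    # power usage effectiveness (1.0 is the theoretical minimum)
grid_kg_co2_per_mwh = 400.0  # regional grid carbon intensity
water_m3_per_mwh = 2.0       # cooling and generation water intensity

it_mwh = annual_it_energy_mwh(chips, watts_per_chip, utilization)
total_mwh = facility_energy_mwh(it_mwh, pue)
co2_tonnes = total_mwh * grid_kg_co2_per_mwh / 1000
water_m3 = total_mwh * water_m3_per_mwh

print(f"Facility energy: {total_mwh/1e6:.1f} TWh/yr, "
      f"CO2: {co2_tonnes/1e6:.1f} Mt/yr, water: {water_m3/1e9:.2f} bn m3/yr")
```

A SAIQ-style quotient would then relate some measure of useful AI output to these money, energy, carbon and water inputs, letting an organization track whether it is getting more from each unit of resource over time.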
"Just speaking personally, I'd be optimistic," said Robinson. Note: Check out this new Fortune video about my tour of IBM's quantum computing test lab. I had a fabulous time hanging out at IBM's Yorktown Heights campus (a midcentury modern marvel designed by the same guy as the St. Louis Arch and the classic TWA Flight Center at JFK Airport) in New York. The video was part of my coverage for this year's Fortune 500 issue that included an article that dug deep into IBM's recent rebound. As I said in my piece, "walking through the IBM research center is like stepping into two worlds at once. There are the steel and glass curves of Saarinen's design, punctuated by massive walls made of stones collected from the surrounding fields, with original Eames chairs dotting discussion nooks. But this 20th-century modernism contrasts starkly with the sleek, massive, refrigerator-like quantum computer -- among the most advanced in the world -- that anchors the collaboration area and working lab, where it whooshes with the steady hum of its cooling system." Ilya Sutskever says he is now CEO of Safe Superintelligence, after Daniel Gross steps down to join Meta. Ilya Sutskever, the former OpenAI chief scientist who founded Safe Superintelligence (SSI) with Daniel Gross and Daniel Levy a year ago, confirmed that he will now serve as SSI's CEO after Daniel Gross stepped down. Sustkever posted on X saying: "Daniel Gross's time with us has been winding down, and as of June 29 he is officially no longer a part of SSI. We are grateful for his early contributions to the company and wish him well in his next endeavor. I am now formally CEO of SSI, and Daniel Levy is President. The technical team continues to report to me. You might have heard rumors of companies looking to acquire us. We are flattered by their attention but are focused on seeing our work through." Meta was rumored to have sought to acquire the $32 billion-valued SSI. Chinese AI companies erode U.S. dominance. According to the Wall Street Journal, Chinese artificial intelligence companies are gaining ground globally, challenging U.S. supremacy and intensifying a potential AI arms race. Across Europe, the Middle East, Africa, and Asia, organizations -- from multinational banks like HSBC and Standard Chartered to Saudi Aramco -- are increasingly adopting large language models from Chinese firms such as DeepSeek and Alibaba as alternatives to U.S. offerings like ChatGPT. Even American cloud giants like Amazon Web Services, Microsoft, and Google now offer access to DeepSeek's models, despite U.S. government security restrictions on the company's apps. While OpenAI's ChatGPT still leads in global adoption -- with 910 million downloads versus DeepSeek's 125 million -- Chinese models are undercutting U.S. competition by offering nearly comparable performance at much lower prices. Meta's AI talent bidding war heats up. As Mark Zuckerberg rapidly staffs up Meta's new superintelligence lab, his company has reportedly offered some OpenAI researchers eye-popping pay packages of up to $300 million over four years, with more than $100 million in first-year compensation, Wired reports. The offers, which include immediate stock vesting, have been extended to at least 10 OpenAI employees, according to sources familiar with the negotiations. While Meta's aggressive recruiting tactics have caught the attention of top talent, some OpenAI staffers told Wired they're weighing the massive payouts against their potential impact at Meta versus staying at OpenAI. 
A Meta spokesperson pushed back, claiming reports of the offer sizes are exaggerated. Still, even Meta's senior engineers typically make around $850,000 per year, with those in higher pay bands earning over $1.5 million annually, according to Levels.FYI data.

Microsoft's sales overhaul goes all-in on AI. Microsoft's sales chief, Judson Althoff, is reshaping the company's sales organization to double down on AI, according to an internal memo obtained by Business Insider. Althoff's Microsoft Customer and Partner Solutions (MCAPS) unit will now focus on embedding Copilot across devices and roles, deepening Microsoft 365 and Dynamics 365 adoption, winning high-impact AI deals, expanding Azure cloud migration, and strengthening cybersecurity to support AI growth. The memo, sent just one day before Microsoft's latest round of layoffs -- many of which affected Althoff's sales teams -- outlined his vision to make Microsoft "the Frontier AI Firm." According to Business Insider, this restructuring follows Althoff's earlier plan to cut the number of sales solution areas in half starting this fiscal year.

* The new CEO flex: Bragging that AI handles exactly X% of the work -- by Sharon Goldman
* Sam Altman scoffs at Mark Zuckerberg's AI recruitment drive and says Meta hasn't even got their 'top people' -- by Beatrice Nolan
* Figma files for IPO nearly two years after $20 billion Adobe buyout fell through -- by Allie Garfinkle

That's how much U.S. investment in AI companies soared to in the first quarter of this year -- a 33% jump from the previous quarter and a staggering 550% increase compared to the quarter before ChatGPT's 2022 debut, according to PitchBook. The New York Times reports that Meta, Microsoft, Amazon, and Google plan to spend a combined $320 billion on infrastructure this year -- more than double what they spent just two years ago. A huge chunk of that will go toward building new data centers to keep up with the exploding demand for AI.
[8]
The $320B AI revolution lets you create ultra-realistic videos and...
Shortly after the US military announced it had obliterated Iran's nuclear facilities without sustaining any damage or casualties, a photo circulated on the internet appearing to refute those claims. It showed a B-2 military plane - the type used to bomb the facilities - crashed into the dirt with its left wing busted, surrounded by emergency workers. It was enough to make people question whether the attack was as seamless as the President maintained. However, the eagle-eyed could see something amiss: an emergency worker is unnaturally blended into the background in a manner that could never happen in real life. Another picture showed purported Iranian soldiers by a downed B-2, but they were way too large in comparison to the supposedly downed jet. Both pictures were AI generated.

"Anything that can be used for good can also be used for bad," Gary Rivlin, author of "AI Valley," told The Post. He says the cleverest AI - which tech companies are expected to pump $320 billion into this year alone - is now "getting to 95 percent undetectable [as fake]." The Pulitzer Prize-winning expert admits, "Sometimes I can't tell the difference."

Another example, this time a video, also homed in on politically sensitive events. Circulated during the recent protests against Immigration and Customs Enforcement on the streets of Los Angeles, it showed a National Guard soldier named 'Bob' eating a burrito and joking about it being "criminally underrated." The tell-tale signs it was fake were more subtle - "Bob" does not remove his mask as he eats and "police" isn't written correctly on the car behind him - but it was enough to hit a nerve with the Latino community.

Set-ups like these fuel the plot of the recent HBO movie "Mountainhead" - where a group of tech bros meet against the backdrop of governments and world order collapsing under the weight of mobs misguided by AI-generated deepfakes that one of the high-flying "brewsters" (as they call themselves) is responsible for.

"There will be important implications, and, as a society, we will have to deal with them. You can see something fake and believe that it's real. I worry that we will let AI run things, and AI has no common sense," Rivlin added.

Nightmare scenarios aside, AI has many positive applications and is already vastly enriching scientific study, upending entire industries, cutting down on the time people spend on repetitive tasks and helping them do their jobs better. According to Wired.com, Microsoft claims it has developed an AI system which is four times more accurate at diagnosing diseases than doctors. According to a recent poll cited by the New York Times, 43% of people admitted to using AI to help them with their work. And, for the most part, the casual user has been able to take advantage of the technology for free - at least for now. A report from Menlo Ventures claimed only three percent of an estimated 1.8 billion users pay for artificial intelligence.

Video capabilities may have only recently gotten to the point where experts decided they were good enough to fool the general public - the world's first entirely AI-generated TV ad aired in June - but now the floodgates are open. Showing people partying in various US locations, the AI ad took just two days to create and is virtually indistinguishable to the naked eye from real footage. And all the necessary tools are available to the public. DeepFaceLab swaps faces, HeyGen clones voices, Midjourney, OpenAI, Google's Veo 3 and others can create video of real people in unreal situations.
In a world dominated by robocalls and texts and rogue states spreading propaganda, how will we know what we can trust? Already, the lines are blurring with deepfakes in everyday usage on the internet. In minutes, The Post found a supposed "Oprah Winfrey" peddling diet products and "Mick Jagger" and "Clint Eastwood" apparently hawking T-shirts saying "Don't mess with old people, we didn't get this age by being stupid." More fantastically, you can chat with AI bots such as "Kurt Cobain," the rocker who killed himself in 1994 -- years before he would have been able to sign up for an email address.

"In a world where there are bad actors, there will be detectors," Mike Belinsky assured The Post. Belinsky -- who works as a director in the AI Institute at Schmidt Sciences, the science philanthropy wing operated by former Google CEO Eric Schmidt and his wife Wendy -- would not reveal the exact nature of these detectors, but suggested the AI battlefield will resemble a high-tech game of whack-a-mole. Likening it to computer viruses, he added: "This is not a static problem. Everybody will need to keep updating. Sometimes the bad actors are ahead and sometimes the defenders are ahead."

Meta boss Mark Zuckerberg says that, like video technology, AI chatbots can cross the line into seeming as real as your friend who lives down the block, and he's betting big on them. "The reality," he said on a recent podcast, "is that a lot of people just don't have the connections, and they feel more alone a lot of the time than they would like." Meta has reportedly plowed $14.3 billion into a start-up called Scale AI and hired its founder. The Zuckerberg-run company is also said to have spent as much as $100 million to bring in top AI researchers.

Elsewhere in Silicon Valley, OpenAI kingpin Sam Altman and Jony Ive - the former Apple designer behind devices including the iPhone - have joined forces. Altman's OpenAI purchased Ive's one-year-old AI devices startup, io, for some $6.4 billion and they are working on launching a hardware device. "I think it is the coolest piece of technology that the world will have ever seen," Altman claims. Clues as to what exactly the device will do are scant, but the Wall Street Journal wrote Ive and Altman are planning to build "companion devices," which Mark Gurman's Power On newsletter speculated was "a machine that develops a relationship with a human using AI." It sounds like a life co-pilot without the drawbacks and complications of an emotion-riddled human wingman.

But do we really want to replace our friends with computer chips? Rivlin has his own thoughts: "Humans have imperfect memories. This could be like a court transcript of life. You can ask it a question about something that was discussed months [or years] ago and it would call it up." He's excited about various new AI technologies, but also has concerns about both data collection and privacy. "There is an expression that if you don't pay for the product, you are the product. We search the web for free, but it gets sold to the highest bidder for advertising.

"I don't trust big tech and AI is in the hands of big tech. They have not figured out how to make money on it yet, but they will," he ominously added.
A comprehensive look at how AI is transforming internet usage, challenging copyright norms, and shifting global power dynamics, with insights from industry leaders and recent legal rulings.
The internet is undergoing a significant transformation, with AI bots increasingly dominating web traffic. According to Cloudflare, which oversees about 20% of web traffic, there has been a notable increase in bot activity and a decrease in human visits to websites [1]. This shift is driven by AI companies like OpenAI, which are developing chatbots capable of answering queries by scraping information from across the web without users needing to visit original sources.
The volume of AI bot activity has risen 125% in just six months, as reported by Webflow [1]. This trend is causing concern among publishers, who are experiencing plunging referral traffic and lower ad revenue. In response, some websites are experimenting with new models, such as Cloudflare's "block or pay" crawler model, which attempts to charge for content use on a "per crawl" basis [1], as sketched below.
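To make the "block or pay" idea concrete, here is a minimal, hypothetical sketch of how such a gate could work at the HTTP layer: requests from known AI-crawler user agents get a 402 Payment Required response unless they carry a payment credential. This illustrates the general concept only, not Cloudflare's actual implementation; the header name, bot list, and price below are placeholder assumptions.

```python
# Minimal sketch of a "block or pay" crawler gate -- illustrative only,
# not Cloudflare's implementation. Header name, bot list and price are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot")   # example user-agent substrings
PAYMENT_HEADER = "X-Crawl-Payment-Token"                 # hypothetical header

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        is_ai_bot = any(bot in ua for bot in AI_CRAWLERS)
        if is_ai_bot and not self.headers.get(PAYMENT_HEADER):
            # No payment credential: refuse the crawl and state the per-crawl price.
            self.send_response(402)  # HTTP 402 Payment Required
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Crawling this page costs $0.001 per request.\n")
            return
        # Humans, good bots, and paying crawlers get the content.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Article content here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), PayPerCrawlHandler).serve_forever()
```

In practice, any real deployment would sit behind a CDN that verifies crawler identity and handles billing; the sketch only shows where the charge-or-block decision happens.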
The rapid advancement of AI has sparked intense debates and legal battles over copyright issues. Recent court rulings have favored AI companies in their use of copyrighted material for training purposes. A U.S. judge ruled that Anthropic's use of books to train its AI system did not breach copyright law, comparing it to a "reader aspiring to be a writer" [4]. Similarly, a decision in favor of Meta stated that authors had not presented sufficient evidence of "market dilution" caused by the company's AI [4].
However, the legal landscape remains complex, with ongoing lawsuits against companies like Microsoft, Midjourney, and OpenAI [4]. The outcomes of these cases could have far-reaching implications for the AI industry and content creators alike.
Source: Reuters
Karen Hao's book "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" draws parallels between the rise of AI companies and historical empires [3]. Hao argues that AI firms are building wealth through resource extraction and labor exploitation, citing examples such as OpenAI's reported outsourcing of data annotation to workers in Kenya for as little as $2 per hour [3].
The book also highlights the transition of OpenAI from a culture of transparency to secrecy, symbolized by events in 2018 and 2019, including Elon Musk's departure and the company's decision to withhold certain research [3]. This shift raises questions about the ethical implications of AI development and the concentration of power in the hands of a few tech giants.
Source: Mashable
President Donald Trump's administration has emphasized AI innovation, backing the $500 billion Stargate Project to build AI supercomputers in the United States [5]. Additionally, an executive order was issued to promote AI literacy and proficiency in K-12 schools [5].
In the religious sphere, Pope Leo XIV has voiced concerns about AI's impact on human dignity, justice, and labor [5]. The Pope's statements reflect growing awareness of AI's potential societal effects beyond technological advancement alone.
Source: Reuters
As AI continues to evolve at a breakneck pace, it is reshaping various aspects of our digital landscape, from internet usage patterns to copyright laws and global power dynamics. The ongoing legal battles, ethical debates, and governmental responses highlight the complex challenges that accompany this technological revolution. As we move forward, finding a balance between innovation, ethical considerations, and societal impact will be crucial in shaping the future of AI.