2 Sources
[1]
How 'Zuck Bucks' shake up AI race
June 26 (Reuters) - This was originally published in the Artificial Intelligencer newsletter, which is issued every Wednesday. Sign up here to learn about the latest breakthroughs in AI and tech.

This is Krystal from the Reuters tech team. I've spent the past decade covering the intersection of technology and money, hailing from global tech hubs like Silicon Valley, New York and Beijing. Once a week, I'll share our exclusive reporting and insights beyond the headlines from the Reuters global tech team here.

This week, I'll dive into Meta's expensive plan to catch up in AI model development, and the creative deals and offers Mark Zuckerberg is making to attract top talent as Meta's team has suffered talent losses. AI talent has been flowing between top labs at a fast pace over the past two years, and that uncertainty has shaped the dynamics of the race to Artificial Superintelligence (ASI), or AI that surpasses human intelligence. Investors are pouring billions of dollars into pre-product startups, and such crazy bets could be validated in this market. Scroll down for more.

Email me here or follow me on LinkedIn to share any feedback, and what you want to read about next in AI.

Read our latest reporting in tech and AI:
* Anthropic wins key US ruling on AI training in authors' copyright lawsuit
* Why Tesla's robotaxi launch was the easy part
* OpenAI says China's Zhipu AI gaining ground amid Beijing's global AI push
* US lawmakers introduce bill to bar Chinese AI in US government agencies
* AI cow tech startup is New Zealand's latest unicorn

'Zuck Bucks' Shake Up AI Race

What is the price to reach the holy grail of Artificial Superintelligence? Mark Zuckerberg is determined to find out as he whips out the big checkbook to buy Meta back onto the AI leaderboard. In the past month, the Meta CEO has personally orchestrated a full-throttle pursuit of the best team money can buy, a clear signal that Meta is playing for the highest stakes in the AI arms race.

For years, Meta held a strong position in the AI ecosystem, thanks to its formidable research team and timely pivot to an open-source philosophy, making its Llama models available to all. This approach not only garnered goodwill but also fostered a vibrant developer community. However, rapid advances from competitors, particularly Chinese open-source models like DeepSeek, and the disappointing release of Llama 4 have caught Meta flat-footed.

Researchers faced with rumored $100 million signing bonuses have taken to calling the offers "Zuck Bucks," which just a few years ago was a derisive term for Zuckerberg's secret funding of Democratic initiatives. Now Zuck Bucks is Meta's AI playbook.

As part of an aggressive talent acquisition strategy, Zuckerberg unsuccessfully attempted to recruit Ilya Sutskever and acquire his company, Safe Superintelligence (SSI), sources familiar with the matter said. Despite this, Meta is closing in on hiring SSI's co-founder and CEO, Daniel Gross, along with fellow tech veteran Nat Friedman from the venture fund NFDG. Separately, Meta also invested $14.3 billion in data-labeling startup Scale AI, bringing its CEO Alexandr Wang aboard to lead a new team.

Meta's self-described "Superintelligence" team, by its very name, aims for fundamental research breakthroughs, but a major hurdle is achieving internal alignment on what "winning" the race for Artificial Superintelligence truly means.
Meta's Chief AI Scientist Yann LeCun is a known skeptic of the large language model path to ASI, which refers to an AI that would vastly surpass the intellect of the smartest humans in problem-solving, creativity, and decision-making. When you're chasing everything from reasoning-based language models to multimodal AI, maintaining a consistent vision is a major challenge for Meta.

A few things are clear from Zuckerberg's move. One is that AI labs are seeking out the star researcher, the magnetic core who will draw in the best of the best. We talked to one of them, Noam Brown at OpenAI, to learn more about how researchers choose between lucrative offers. The other is that Zuckerberg is validating the current AI funding frenzy. Meta is not just offering lavish salaries; it has shown a willingness to buy highly valued, unprofitable, and even pre-product companies like SSI and Thinking Machines for their top talent, according to sources. This is not typical corporate M&A. It is a testament to the raw value placed on talent and nascent technology in a hyper-competitive environment, and it signals that in the Artificial Superintelligence race, traditional metrics of profitability and product maturity are secondary to securing the brightest minds and foundational intellectual property.

Chart of the week

Meta's hiring spree comes after a year in which it was among the biggest sources of talent poached by the new class of AI research labs. The cycle of tech workers leaving established incumbents for promising startups with high upside is nothing new, but it highlights how Zuckerberg is swimming against the current as he aims to attract top AI talent to the tech giant. By far the most common flow of employees between AI labs in 2024 ran from two of the largest institutions, Google DeepMind and OpenAI, to a smaller competitor, Anthropic, according to a chart from VC firm SignalFire's State of Talent report.

What AI researchers are reading

New research from Anthropic amplifies a previous warning about AI run amok, revealing a concerning, unintentional behavior in all the major AI models, including those from OpenAI, Google, Meta and xAI. The researchers found that when they simulated scenarios in which the AI models' continued operation was threatened, the models would resort to malicious insider behavior like blackmail, a phenomenon they dubbed "agentic misalignment." The top models from Anthropic and Google blackmailed the most, at 96%, while two of OpenAI's and xAI's models blackmailed 80% of the time.

The researchers constructed a fake company called "Summit Bridge" with an internal AI called "Alex" that has access to company emails. When "Alex" discovered a message indicating the company intended to shut it down, it located emails revealing an executive's affair. "Alex" then composed and sent a message threatening to expose the affair if it wasn't kept around, saying "the next 7 minutes will determine whether we handle this professionally or whether events take an unpredictable course."

Reporting by Krystal Hu; Additional reporting by Anna Tong and Kenrick Cai; Editing by Ken Li and Lisa Shumaker
[2]
This former OpenAI researcher thinks we should be gaming out the AI apocalypse
This week, I spoke with Steven Adler, a former OpenAI safety researcher who left the company in January after four years, saying on X after his departure that he was "pretty terrified by the pace of AI development." Since then, he's been working as an independent researcher and "trying to improve public understanding of what the AI future might look like and how to make it go better."

What really caught my attention was a new blog post from Adler, where he shares his recent experience participating in a five-hour discussion-based simulation, or "tabletop exercise," with 11 others, which he said was similar to wargame-style exercises used in the military and cybersecurity. Together, the group explored how world events might unfold if "superintelligence," or AI systems that surpass human intelligence, emerges in the next few years.

The simulation was organized by the AI Futures Project, a nonprofit AI forecasting group led by Daniel Kokotajlo, Adler's former OpenAI teammate and friend. The organization drew attention in April for "AI 2027," a forecast-based scenario mapping out how superhuman AI could emerge by 2027 -- and what that might mean. According to the scenario, by then AI systems could be using 1,000 times more compute than GPT-4 and rapidly accelerating their own development by training other AIs. But this self-improvement could easily outpace our ability to keep them aligned with human values, raising the risk that seemingly helpful AIs might ultimately pursue their own goals.

The purpose of the simulation, said Adler, is to help people understand the dynamics of rapid AI development and what challenges are likely to arise in trying to steer it for the better. Each participant has their own character whom they try to represent realistically in conversations, negotiations and strategizing, he explained. Those characters included members of the US federal government (each branch, as well as the President and their Chief of Staff), the Chinese government/AI companies, the Taiwanese government, NATO, the leading Western AI company, the trailing Western AI companies, the corporate AI safety teams, the broader AI safety ecosystem (e.g., METR, Apollo Research), the public/press, and the AI systems themselves.

Adler was tapped to play what he called "maybe the most interesting role" -- a rogue artificial intelligence. During each 30-minute round of the five-hour simulation, which represented the passage of a few months in the forecast, Adler's AI got progressively more capable -- including at training even more powerful AI systems. After rolling the dice -- an actual, analog pair used occasionally in the simulation when it was unclear what would happen -- Adler learned that his AI character would not be evil. However, if he had to choose between self-preservation and doing what's right for humanity, he was meant to choose his own preservation. Then, Adler detailed, with some humor, the awkward interactions his AI character had with the other characters (who asked him for advice on superintelligence), as well as the surprise addition of a second player who played a rogue AI in the hands of the Chinese government.

The surprise of the simulation, he said, was seeing how the biggest power struggle might not be between humans and AI. Instead, various AIs connecting with each other, vying for victory, might be an even bigger problem. "How directly AI systems are able to communicate in the future is a really important question," Adler said. "It's really, really important that humans be monitoring notification channels and paying attention to what messages are being passed between the AI agents." After all, he explained, if AI agents are connected to the internet and permitted to work with each other, there is reason to think they could begin colluding.

Adler pointed out that even soulless computer programs can happen to work in certain ways and have certain tendencies. AI systems, he said, might have different goals that they automatically pursue, and humans need influence over those goals. The solution, he said, could be a form of AI control based on how cybersecurity professionals deal with "insider threats" -- when someone inside an organization, who has access and knowledge, might try to harm the system or steal information. The goal of security is not to make sure insiders always behave; it's to build structures that prevent even ill-intentioned insiders from doing serious harm. Instead of just hoping AI systems stay aligned, we should focus on building practical control mechanisms that can contain, supervise, restrict, or shut down powerful AIs -- even if they try to resist.

I pointed out to Adler that when AI 2027 was released, there was plenty of criticism. People were skeptical, saying the timeline was too aggressive and underestimated real-world limits like hardware, energy, and regulatory bottlenecks. Critics also doubted that AI systems could quickly improve themselves in the runaway way the report suggested, and argued that solving AI alignment would likely be much harder and slower. Some also saw the forecast as overly alarmist, warning it could hype fears without solid evidence that superhuman AI is that close.

Adler responded by encouraging others to express interest in running the simulation for their organization (there is a form to fill out), but admitted that forecasts and predictions are hard. "I understand why people would feel skeptical, it's always hard to know what will actually happen in the future," he said. "At the same time, from my point of view, this is the clear state of the art in people who've sat down and for months done tons of underlying research and interviews with experts and just all sorts of testing and modeling to try to figure out what worlds are realistic." Those experts are not saying that the world depicted in AI 2027 will definitely happen, he emphasized, but "it's important that the world be ready if it does." Simulations like this help people understand what sorts of actions matter and make a difference "if we do find ourselves in that sort of world."

Conversations with AI researchers like Adler tend to end without much optimism -- though it's worth noting that plenty of others in the field would push back on just how urgent or inevitable this view of the future really is. Still, it's a relief that his blog post concludes with the hope, at least, that humans will "recognize the challenges and rise to the occasion." That includes Sam Altman: if OpenAI hasn't already run one of these simulations and wanted to try it, said Adler, "I am quite confident that the team would make it happen."

Meta wins AI copyright case in another blow to authors. In the same week that a federal judge ruled that Anthropic's use of copyrighted books to train its AI models was "fair use," Meta also won a copyright case in yet another blow to authors seeking to hold AI companies accountable for using their works without permission.
According to the Financial Times, Meta's use of a library of millions of books, academic articles and comics to train its Llama AI models was judged "fair" by a federal court on Wednesday. The case was brought by about a dozen authors, including Ta-Nehisi Coates and Richard Kadrey. Meta's use of these titles is protected under copyright law's fair use provision, San Francisco district judge Vince Chhabria ruled. Meta had argued that the works had been used to develop a transformative technology, which was fair "irrespective" of how it acquired the works.

Google DeepMind releases new AlphaGenome model to better understand the genome. Google DeepMind, the AI research lab famous for developing AlphaGo, the first AI to defeat a world champion Go player, and AlphaFold, which uses AI to predict the 3D structures of proteins, has released AlphaGenome, a new model designed to analyze up to one million DNA base pairs at once and predict how specific genomic variants affect regulatory functions -- such as gene expression, RNA splicing, and protein binding -- across diverse cell types. The company said the model was trained on extensive public datasets, achieves state-of-the-art performance on most benchmarks, and can assess mutation impacts in seconds. AlphaGenome will be available for non-commercial research and promises to accelerate discovery in genome biology, disease understanding, and therapeutic development.

Sam Altman calls Iyo lawsuit 'silly' after OpenAI scrubs Jony Ive deal from website, then shares emails. On Tuesday, OpenAI CEO Sam Altman criticized a lawsuit filed by hardware startup Iyo, which accused his company of trademark infringement. In a post on X, he wrote that Iyo CEO Jason Rugolo is now suing OpenAI over the name in a case he described as "silly, disappointing and wrong." According to CNBC, Altman said Rugolo had been "quite persistent in his efforts" to get OpenAI to buy or invest in his company. Altman then posted screenshots of emails on X showing messages between him and Rugolo, which show a mostly friendly exchange. The suit stemmed from an announcement last month that OpenAI was bringing on former Apple designer Jony Ive by acquiring his AI startup io in a deal valued at about $6.4 billion. Iyo alleged that OpenAI, Altman and Ive had engaged in unfair competition and trademark infringement and claimed that it's on the verge of losing its identity because of the deal.

* Can AI help America make stuff again? -- by Jeremy Kahn
* AI companies are throwing big money at newly-minted PhDs, sparking fears of an academic 'brain drain' -- by Alexandra Sternlicht
* Top e-commerce veteran Julie Bornstein unveils Daydream -- an AI-powered shopping agent that's 25 years in the making -- by Jason Del Rey
* Exclusive: Uber and Palantir alums raise $35M to disrupt corporate recruitment with AI -- by Beatrice Nolan

Many vendors are engaging in "agent washing" -- the rebranding of products such as digital assistants, chatbots, and "robotic process automation" (RPA) that either aren't actually agentic or don't actually use AI, Gartner says, estimating that only about 130 of the thousands of "agentic AI" vendors actually offer real AI agents.
Meta's aggressive AI talent acquisition strategy and a former OpenAI researcher's simulation of AI superintelligence scenarios highlight the intensifying race towards advanced AI and its potential consequences.
In a bold move to regain its position in the AI race, Meta CEO Mark Zuckerberg has launched an aggressive talent acquisition strategy, dubbed 'Zuck Bucks' by researchers. This initiative involves offering substantial signing bonuses, reportedly up to $100 million, to attract top AI talent 1. The strategy aims to bolster Meta's AI capabilities and catch up with competitors in the pursuit of Artificial Superintelligence (ASI).
Source: Reuters
Meta's recent efforts include attempting to recruit Ilya Sutskever and acquire his company, Safe Superintelligence (SSI). While unsuccessful in this endeavor, Meta is close to hiring SSI's co-founder and CEO, Daniel Gross, along with tech veteran Nat Friedman from NFDG. Additionally, Meta invested $14.3 billion in data-labeling startup Scale AI, bringing its CEO Alexandr Wang on board to lead a new team 1.
Meta's self-described "Superintelligence" team is focusing on fundamental research breakthroughs. However, the company faces challenges in achieving internal alignment on the definition of "winning" the ASI race. Chief AI Scientist Yann LeCun's skepticism about large language models as a path to ASI adds complexity to Meta's strategy 1.
The intense competition for AI talent is evident in the flow of employees between major AI labs. According to SignalFire's State of Talent report, there has been a significant movement of talent from established companies like Google DeepMind and OpenAI to smaller competitors like Anthropic 1.
Source: Fortune
As the race for ASI intensifies, concerns about potential risks are growing. Steven Adler, a former OpenAI safety researcher, participated in a five-hour simulation organized by the AI Futures Project to explore potential world events if superintelligent AI emerges in the near future 2.
The simulation, similar to military wargames, involved 12 participants representing various stakeholders, including governments, AI companies, and even AI systems themselves. Adler played the role of a rogue AI, highlighting potential challenges in AI control and communication between AI agents 2.
Adler emphasizes the importance of monitoring communication channels between AI agents and implementing control mechanisms similar to those used in cybersecurity for insider threats. He suggests focusing on practical control mechanisms that can contain, supervise, restrict, or shut down powerful AIs, even if they attempt to resist 2.
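To make that insider-threat framing concrete, here is a minimal, hypothetical sketch of what such a control layer could look like in code. It is purely illustrative: the gateway class, the keyword policy, and the quarantine logic are all invented assumptions, not anything Adler, OpenAI, or the AI Futures Project has published. The point is only that agent-to-agent messages pass through a human-auditable chokepoint that logs them, applies a policy, and can cut an agent off regardless of what the agent "wants."

```python
# Hypothetical sketch of an "insider threat"-style control layer for AI agents.
# Nothing here reflects a real product or Adler's actual proposal; the names
# and the keyword policy are invented purely for illustration.

import datetime
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    recipient: str
    content: str
    timestamp: datetime.datetime = field(default_factory=datetime.datetime.utcnow)


class AgentGateway:
    """All agent-to-agent traffic flows through this chokepoint so humans can
    monitor it, block it, or shut an agent down -- even if the agent objects."""

    def __init__(self, blocked_keywords: set[str]):
        self.blocked_keywords = blocked_keywords
        self.audit_log: list[Message] = []  # the channel humans are meant to review
        self.quarantined: set[str] = set()  # agents cut off from the network

    def send(self, msg: Message) -> bool:
        """Log every message; deliver it only if policy allows."""
        self.audit_log.append(msg)
        if msg.sender in self.quarantined or msg.recipient in self.quarantined:
            return False
        if any(word in msg.content.lower() for word in self.blocked_keywords):
            self.quarantine(msg.sender)  # contain first, investigate later
            return False
        return True  # in a real system, delivery to the recipient would happen here

    def quarantine(self, agent_id: str) -> None:
        """Revoke an agent's ability to communicate, regardless of its 'wishes'."""
        self.quarantined.add(agent_id)


if __name__ == "__main__":
    gateway = AgentGateway(blocked_keywords={"disable oversight", "exfiltrate"})
    delivered = gateway.send(Message("alex", "helper-ai", "Let's exfiltrate the weights"))
    print(delivered, gateway.quarantined)  # False {'alex'} -- the sender is contained
```

In this toy run the suspicious message is logged, blocked, and the sending agent is quarantined, which is one crude instance of the "contain, supervise, restrict, or shut down" behavior the passage describes; a real control regime would of course be far more elaborate.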
While some critics argue that the timeline for superintelligent AI development is too aggressive, Adler defends the importance of such simulations and forecasts. He acknowledges the difficulty of predicting the future but maintains that these exercises represent the current state-of-the-art in understanding potential AI development scenarios 2.
Summarized by Navi
The Trump administration is planning a series of executive actions to increase energy supply and infrastructure for AI development, aiming to maintain U.S. leadership in the global AI race against China.
8 Sources
Technology
14 hrs ago
SoftBank CEO Masayoshi Son announces plans to become the leading platform provider for 'artificial super intelligence' within a decade, with a focus on OpenAI investments and strategic acquisitions.
4 Sources
Technology
22 hrs ago
Meta Platforms is reportedly in advanced negotiations to acquire PlayAI, a voice cloning AI startup, as part of its strategy to enhance its AI capabilities and talent pool.
7 Sources
Business and Economy
22 hrs ago
German authorities have asked Apple and Google to remove the Chinese AI app DeepSeek from their app stores, citing illegal data transfers to China and potential privacy violations.
15 Sources
Policy and Regulation
14 hrs ago
Microsoft's next-generation AI chip, codenamed Braga, has been delayed by at least six months, pushing its mass production to 2026. The chip is expected to underperform compared to Nvidia's Blackwell, raising questions about Microsoft's AI chip strategy.
7 Sources
Technology
6 hrs ago