5 Sources
[1]
Sam Altman throws shade at Anthropic's cyber model, Mythos: 'fear-based marketing' | TechCrunch
OpenAI and Anthropic continue to take swipes at each other. This week, during a podcast appearance, OpenAI CEO Sam Altman called out his competitor's new cybersecurity model, arguing that the company was using fear to make its product sound more impressive than it actually is. Anthropic announced Mythos earlier this month, releasing the model to a small cohort of enterprise customers. The company has claimed that Mythos is too powerful to be released to the public out of concern that cybercriminals will weaponize it. Critics have said this rhetoric is overblown. During an appearance on the podcast "Core Memory," Altman implied that Anthropic's "fear-based marketing" was a good way to keep AI in the hands of a small and exclusive elite. "There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he said. "You can justify that in a lot of different ways." "It is clearly incredible marketing to say, 'We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,'" he added. Fear-based marketing was not invented by Anthropic. Arguably, much of the AI industry has leveraged scare tactics and hyperbole to make its tools sound powerful. Ongoing rhetoric about how AI may lead to the end of the world hasn't just come from luddite doomer activists; it has also come from the people selling this technology to the public -- Altman included.
[2]
OpenAI Is Jealous
OpenAI does not like to be left out. The week after Anthropic announced Claude Mythos Preview -- an AI model that has put governments around the world on edge because of its potential ability to hack into banks, energy grids, and military systems -- OpenAI shared a program that is uncannily similar. And just like Anthropic did with its model, OpenAI has, for cybersecurity purposes, restricted access to this new bot, called GPT-5.4-Cyber, to a small group of trusted users. This sequence has become something of a pattern: First Anthropic will make an announcement, and then OpenAI will follow suit. Last year, Anthropic launched Claude Code, an AI coding tool. A couple of months later, OpenAI came out with its own version, Codex. When Claude Code had a breakout moment in January, OpenAI responded with two major updates to Codex alongside a press blitz for the product. And earlier this month, OpenAI released a version of Codex that allows it to use other apps on your desktop -- similar to an existing Anthropic tool called Claude Cowork. Until recently, Anthropic -- founded by a group of former OpenAI employees in 2021 -- played the role of younger brother. OpenAI kicked off the entire AI boom with the release of ChatGPT, and has had more users, funding, and name recognition ever since. But Anthropic has been riding high on the explosive popularity of Claude Code and booming sales of its AI models to large corporations. The firm's showdown with the Pentagon has also helped vault it into the public eye. In early April, Anthropic said its revenue rate had hit $30 billion a year -- appearing to surpass OpenAI's. In its public messaging, OpenAI has been indifferent or even somewhat derogatory toward Anthropic. Last week, when OpenAI released its newest model, GPT-5.5, the announcement was paired with direct and veiled references to how it beat out Anthropic's latest, Claude Opus 4.7. But internally, the firm is seemingly on edge. 
In a recent leaked company-wide memo, Denise Dresser, OpenAI's chief revenue officer, felt the need to address one particular competitor: "Here are a few things worth keeping in mind, especially on Anthropic." The rival firm's product offerings are narrow, Dresser wrote, and "their story is built on fear," referencing Anthropic's loud messaging about the dangers of AI. "Our positive message will win over time." (OpenAI, which has a business partnership with The Atlantic, did not respond to a request for comment. Anthropic also did not respond to a request for comment.) If imitation is the sincerest form of flattery, OpenAI's actions are especially telling. At every turn, OpenAI has appeared eager to copy the success of its rival. For starters, as Anthropic's explicit focus on mitigating the risks of AI has apparently won the trust of many consumers, OpenAI has imitated many of its rival's safety initiatives. In early 2026, after Anthropic published a major update to Claude's "Constitution," a document that tells the AI model how to behave, OpenAI launched a major campaign around its equivalent document. But OpenAI's most important, Anthropic-esque pivot has been in its business model. Early on, these two companies made fundamentally different bets on how they would eventually make money. OpenAI positioned itself as a consumer behemoth, hoping to capitalize on ChatGPT's hundreds of millions of users. Last fall, the company launched the AI-video app Sora and an AI-powered web browser. OpenAI has made forays into e-commerce and is testing ads in ChatGPT. Every now and then, the company teases the AI device that it is developing with the former Apple designer Jony Ive. Anthropic, meanwhile, has focused on the less flashy goal of selling its AI tools to businesses and software engineers. Despite OpenAI's numerous advantages, Anthropic's focus on code and business customers seems to be winning. 
Although OpenAI is worth more based on the most recent fundraising rounds, Anthropic now has a higher valuation -- more than $1 trillion -- in some private markets. Anthropic's explosive growth is particularly important as the two companies both race to go public, in turn accessing a huge pool of new investors, and try to prove they will eventually be profitable. (Both companies still have a long way to go in that regard.) OpenAI is now eager to catch up. In December, OpenAI hired Dresser, a former CEO of Slack, to pursue more business customers. In late January, Altman gathered several major executives for a lavish dinner in San Francisco to preview all of the business offerings his company was planning, according to The Information. The company has since made a blitz of announcements around coding tools and enterprise AI offerings, including a new set of "Frontier Alliances": partnerships with several of the world's premier consulting firms, including McKinsey & Company and Boston Consulting Group, to accelerate enterprise adoption of ChatGPT. In mid-March, another internal OpenAI memo reportedly stated that the company needed to eliminate "side quests" and focus on the enterprise and coding markets. Anthropic's success in those areas, the memo stated, should be a "wake-up call" for OpenAI. The firm also scrapped Sora and has been aggressively advertising and messaging about Codex for months now. "I am happy everyone is switching to Codex," Altman wrote on X earlier this month. OpenAI's pivot to its enterprise business has not been total. It did, for instance, recently shell out reportedly hundreds of millions of dollars to acquire a niche tech podcast. And Anthropic, for its part, has had to take some cues from OpenAI -- notably by making big and expensive data-center deals, such as an expansion in its partnership with Amazon Web Services.
Anthropic's CEO, Dario Amodei, has previously insinuated that OpenAI has made such deals "because it sounds cool." Which company will win the AI race is anybody's guess. Regardless, OpenAI's embrace of the Anthropic business model makes one thing abundantly clear: For all the wonder and change that generative AI brings as a technology, there hasn't been any real innovation in the business models of Silicon Valley. For decades, most tech companies have succeeded by either selling ads (the route of Meta and Google) or selling enterprise tools (like Salesforce and Slack). One day OpenAI or Anthropic might cure cancer and remake the world, but for now they still have to pay the bills.
[3]
Anthropic's growing pains mount ahead of OpenAI showdown
Why it matters: The AI darling, whose revenue has tripled to $30 billion this year on the back of its wildly popular coding tools, has never been more valuable or more vulnerable.

* Chief rival OpenAI senses opportunity in the Claude maker's recent stumbles, courting frustrated developers and pitching itself as the steadier alternative ahead of dueling IPOs.

Zoom in: Anthropic's problems over the past two months span nearly every part of its business -- product quality, pricing, security and capacity -- and are starting to compound.

1. Model backlash: Perceived declines in Opus 4.6 performance triggered an initial wave of suspicion, with some developers accusing Anthropic of quietly downgrading its flagship model.
* Its newest model, Opus 4.7, delivered major benchmark gains but drew a mixed public reception, as some users complained of higher token costs, bugs and uneven performance.
2. Capacity crunch: Surging demand is straining Anthropic's compute, with users running into tighter limits and periodic outages -- a red flag for companies that have grown reliant on Claude.
3. Security scares: A software update accidentally exposed internal Claude Code files, handing outsiders a window into Anthropic's most valuable product and raising questions about its internal safeguards.
* Anthropic is now investigating reports that a small group of unauthorized users accessed Mythos -- its most powerful model, withheld over safety concerns about its offensive cyber capabilities.
4. Product confusion: Some users discovered on Tuesday that Claude Code was no longer available on the $20/month Pro plan -- a potentially major shift affecting Anthropic's most popular product and its most accessible tier.
* Facing massive backlash, the company said the move was part of a limited test affecting a small share of users, though that explanation did little to ease fears of broader pricing changes.

Reality check: Anthropic's business is still booming.

* Even as the company pushes enterprise clients toward usage-based pricing, demand hasn't slowed -- nor has revenue.
* Anthropic's standoff with the Pentagon endeared it to AI safety advocates and Trump critics, helping drive a spike in usage that briefly sent Claude to the top of the U.S. App Store.
* "We've seen extraordinary demand for Claude over the past several months, and our team is doing everything we can to scale quickly and responsibly," an Anthropic spokesperson told Axios. "We know it hasn't always been smooth, and we're grateful to our community for the patience and feedback as we work through it."

The big picture: The stakes go far beyond a few product missteps. Anthropic and OpenAI are locked in a race to define the enterprise AI market and to convince investors they deserve massive IPO valuations.

* Anthropic's rapid rise has been fueled by cutting-edge products, developer trust and a reputation for discipline. Now, even small stumbles risk chipping away at that foundation.

What we're watching: OpenAI, no stranger to "code red" moments in the breakneck AI race, is seizing on its competitor's growing pains.

* A leaked memo from OpenAI chief revenue officer Denise Dresser blasted Anthropic as elitist and alleged that the company had overstated its revenue run rate by billions.
* CEO Sam Altman accused Anthropic of "fear-based marketing" in a podcast appearance this week, taking aim at its tightly controlled Mythos rollout.
* And as Anthropic grappled with user backlash from its Claude Code pricing confusion, OpenAI engineers -- egged on by Altman -- openly mocked their rival on social media.

The bottom line: Both Altman and Anthropic CEO Dario Amodei used to signal that the AI race should have multiple winners. Their tone and tactics now suggest otherwise.
[4]
Anthropic Using 'Fear-Based Marketing' to Promote Claude Mythos: Sam Altman - Decrypt
Governments and researchers have warned of both defensive and offensive risks involving Mythos. OpenAI CEO Sam Altman pushed back against growing alarm over rival Anthropic's powerful new AI model Claude Mythos, suggesting the company is using "fear" to market the product. Speaking on the Core Memory podcast hosted by tech journalist Ashlee Vance, Altman argued that the use of "fear-based marketing" was geared towards keeping AI in the hands of a "smaller group of people." "You can justify that in a lot of different ways, and some of it's real, like there are going to be legitimate safety concerns," Altman said. "But if what you want is like 'we need control of AI, just us, because we're the trustworthy people', I think fear-based marketing is probably the most effective way to justify that." Altman added that while there are valid concerns about AI safety, "it is clearly incredible marketing to say: 'We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.'" He noted that it was "not always easy" to balance AI's new capabilities with OpenAI's belief that the technology should be accessible. Anthropic's Claude Mythos model, revealed last month, has drawn intense attention from researchers, governments and the cybersecurity industry, particularly after testing suggested it can autonomously identify software vulnerabilities and execute complex cyber operations. The model is being distributed only to a limited set of organizations through a restricted program. The rollout reflects a broader divide in the AI industry over how powerful systems should be deployed, with some companies emphasizing controlled access and others arguing for wider distribution to accelerate innovation and understanding of the technology. Mythos has become a focal point in that debate. 
The model's capabilities have been framed by Anthropic as both a defensive breakthrough -- allowing faster detection of critical software flaws -- and a potential offensive risk if misused. Early this month, it identified hundreds of vulnerabilities in Mozilla's Firefox browser during testing and has also demonstrated the ability to carry out multi-stage cyberattack simulations. Anthropic has restricted access to the system via Project Glasswing, granting select companies including Amazon, Apple and Microsoft the ability to test its capabilities. The company has also committed significant resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available. The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system. That said, a group of researchers claimed last week they were able to reproduce Mythos' findings using publicly available models. Despite calls within parts of the U.S. government to halt use of the technology over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version of the model on classified networks. On prediction market Myriad, owned by Decrypt's parent company Dastan, users put a 49% chance on Claude Mythos being released to the wider public by June 30. Altman suggested that rhetoric around highly dangerous AI systems may increase as capabilities improve, but argued that not all such claims should be taken at face value. "There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways," he said. 
"I'm sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world." Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity despite shifting narratives. "I don't know where that's coming from... people really want to write the story of pulling back," he said. "But very soon it will be again, like, 'OpenAI is so reckless. How can they be spending this crazy amount?'"
[5]
OpenAI CEO Sam Altman Slams Anthropic's 'Fear-Based Marketing' Strategy For Claude Mythos: 'We Have Built
OpenAI CEO Sam Altman criticized rival Anthropic's marketing strategy for its newly launched cybersecurity product, Claude Mythos. "You can justify that in a lot of different ways," said Altman. He illustrated his point with a metaphor, likening Anthropic's strategy to selling a bomb shelter while threatening to drop a bomb: "We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million," he added.

Anthropic AI Raises Cybersecurity Fears

The following week, Anthropic unveiled Claude Opus 4.7 to test new cyber capabilities, saying it is less advanced than Mythos Preview as part of a phased safety rollout. Last week, Barclays CEO Venkatakrishnan flagged Mythos as a potential catalyst for cyberattacks on global banks. He called it a "serious issue" and warned that Mythos was just the beginning, with more advanced systems likely to emerge rapidly.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
OpenAI CEO Sam Altman criticized Anthropic's marketing strategy for Claude Mythos, calling it fear-based tactics to keep AI in elite hands. The accusation highlights deepening tensions between the AI giants as they compete for enterprise clients and prepare for public offerings, with Anthropic's revenue hitting $30 billion annually.

The AI industry rivalry between OpenAI and Anthropic has escalated into open warfare, with OpenAI CEO Sam Altman publicly condemning his competitor's approach to marketing Claude Mythos, a powerful cybersecurity AI model. During an appearance on the Core Memory podcast, Altman accused Anthropic of using fear-based marketing to justify keeping AI technology in the hands of a select few [1]. "It is clearly incredible marketing to say, 'We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,'" Altman stated, suggesting that Anthropic's restricted release strategy serves commercial interests rather than genuine AI safety concerns [4].

Anthropic announced Claude Mythos earlier this month, releasing the cybersecurity AI model to a small cohort of enterprise customers through Project Glasswing, including Amazon, Apple, and Microsoft [4]. The company claims the model possesses cyber capabilities too powerful for public release, citing concerns that cybercriminals could weaponize it for cyberattacks [1]. Testing has demonstrated that Mythos can autonomously identify software vulnerabilities and execute complex cyber operations, with early assessments showing it identified hundreds of vulnerabilities in Mozilla's Firefox browser [4]. Barclays CEO Venkatakrishnan flagged the model as a potential catalyst for attacks on global banks, calling it a "serious issue" [5]. However, Altman's criticism suggests these warnings may be overstated, with the OpenAI leader arguing that not all claims about highly dangerous AI systems should be taken at face value [4].

The confrontation over Claude Mythos represents just the latest chapter in an intensifying AI industry rivalry. OpenAI released GPT-5.4-Cyber shortly after Anthropic's announcement, following a pattern in which Anthropic launches a product and OpenAI quickly follows suit [2]. This sequence played out with coding tools when Anthropic launched Claude Code and OpenAI responded with Codex, then again when Anthropic introduced Claude Cowork and OpenAI released a similar desktop integration feature [2]. A leaked internal memo from OpenAI chief revenue officer Denise Dresser acknowledged the competitive pressure, stating that Anthropic's "story is built on fear" while insisting "our positive message will win over time" [2].

Anthropic has achieved remarkable growth, with its revenue rate hitting $30 billion annually in early April, apparently surpassing OpenAI's figures [2]. The company's market valuation has exceeded $1 trillion in some private markets, despite OpenAI maintaining a higher valuation in recent fundraising rounds [2]. This success has come primarily through Anthropic's focus on enterprise clients and developer trust, particularly around its coding tools [3]. Yet the rapid expansion has exposed growing pains across multiple fronts. Users have reported perceived declines in Claude Opus 4.6 performance, capacity crunches causing periodic outages, and security lapses including a software update that accidentally exposed internal Claude Code files [3]. Anthropic is also investigating reports that unauthorized users accessed Claude Mythos, raising questions about internal safeguards for its most restricted AI release [3].

The stakes extend far beyond rhetorical sparring, as both companies race to dominate the enterprise AI market and prove their worthiness for massive IPOs [3]. OpenAI has pivoted its strategy to match Anthropic's enterprise focus, hiring former Slack CEO Denise Dresser in December to pursue more business customers [2]. The company announced "Frontier Alliances" with premier consulting firms including McKinsey & Company and Boston Consulting Group to accelerate ChatGPT adoption among enterprise clients [2]. Meanwhile, Dario Amodei's Anthropic faces mounting pressure to maintain developer trust while scaling infrastructure to meet surging demand. An Anthropic spokesperson acknowledged the challenges, stating: "We've seen extraordinary demand for Claude over the past several months, and our team is doing everything we can to scale quickly and responsibly" [3]. As both CEOs abandon earlier rhetoric about multiple winners in the AI race, their competing visions for AI safety, restricted model releases, and access to powerful cyber capabilities will shape the industry's trajectory for years to come.