2 Sources
[1]
Sam Altman throws shade at Anthropic's cyber model, Mythos: 'fear-based marketing' | TechCrunch
OpenAI and Anthropic continue to take swipes at each other. This week, during a podcast appearance, OpenAI CEO Sam Altman called out his competitor's new cybersecurity model, noting that the company was using fear to make its product sound more impressive than it actually is. Anthropic announced Mythos earlier this month, releasing the model to a small cohort of enterprise customers. The company has claimed that Mythos is too powerful to be released to the public out of concern that cybercriminals will weaponize it. Critics have said this rhetoric is overblown.

During an appearance on the podcast "Core Memory," Altman implied that Anthropic's "fear-based marketing" was a good way to keep AI in the hands of a small and exclusive elite. "There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he said. "You can justify that in a lot of different ways."

"It is clearly incredible marketing to say, 'We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,'" he added.

Fear-based marketing was not invented by Anthropic. Arguably, much of the AI industry has leveraged scare tactics and hyperbole to make its tools sound powerful. Ongoing rhetoric about how AI may lead to the end of the world hasn't just come from luddite doomer activists; it has also come from the people selling this technology to the public -- Altman included.
[2]
OpenAI CEO Sam Altman Slams Anthropic's 'Fear-Based Marketing' Strategy For Claude Mythos: 'We Have Built
OpenAI CEO Sam Altman criticized rival Anthropic's marketing strategy for its newly launched cybersecurity product, Claude Mythos. "You can justify that in a lot of different ways," said Altman. He illustrated his point with a metaphor, likening Anthropic's strategy to selling a bomb shelter while threatening to drop a bomb: "We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million," he added.

Anthropic AI Raises Cybersecurity Fears

The following week, Anthropic unveiled Claude Opus 4.7 to test new cyber capabilities, saying it is less advanced than Mythos Preview as part of a phased safety rollout. Last week, Barclays CEO Venkatakrishnan flagged Mythos as a potential catalyst for cyberattacks on global banks. He called it a "serious issue" and warned that Mythos was just the beginning, with more advanced systems likely to emerge rapidly.
OpenAI CEO Sam Altman slams Anthropic's marketing approach for its new cybersecurity AI model, Claude Mythos, comparing it to selling bomb shelters while threatening to drop bombs. The public dispute between OpenAI and Anthropic intensifies as Altman accuses his competitor of using scare tactics to keep AI in elite hands, even as critics note the entire AI industry leverages similar hyperbole.
The public dispute between OpenAI and Anthropic has escalated, with OpenAI CEO Sam Altman directly criticizing his competitor's cybersecurity marketing strategy during a recent podcast appearance. Speaking on the "Core Memory" podcast, Altman took aim at Anthropic's approach to promoting its new cybersecurity AI model, Claude Mythos, characterizing it as fear-based marketing designed to centralize AI control [1]. Altman's sharp critique comes as Anthropic released Mythos earlier this month to a limited group of enterprise customers, claiming the model is too powerful for public release due to concerns that cybercriminals could weaponize it.
Altman didn't mince words when describing what he sees as manipulation tactics. "There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he stated, suggesting that Anthropic's approach serves to maintain exclusive AI control rather than genuinely protect the public [1]. He drove the point home with a vivid analogy: "It is clearly incredible marketing to say, 'We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million'" [2]. This pointed criticism suggests Altman believes Anthropic is leveraging scare tactics to make its product appear more impressive and valuable than it actually is.

Anthropic's Mythos has already generated significant anxiety within the financial sector. Barclays CEO Venkatakrishnan recently flagged Mythos as a potential catalyst for cyberattacks on global banks, describing it as a "serious issue" and warning that more advanced systems are likely to emerge rapidly [2]. Following the initial release, Anthropic unveiled Claude Opus 4.7 to test new cyber capabilities, positioning it as less advanced than Mythos Preview as part of a phased safety rollout [2]. Critics have argued that the rhetoric surrounding Mythos is overblown, though the concerns from major financial institutions suggest the model's capabilities are being taken seriously by potential targets.
While Altman's criticism focuses on Anthropic, observers have noted the irony in his accusations. Fear-based marketing was not invented by Anthropic, and much of the AI industry has leveraged hyperbole and scare tactics to make its tools sound powerful [1]. Ongoing rhetoric about how AI may lead to the end of the world hasn't just come from skeptics or activists; it has also come from the people selling this technology to the public, including Altman himself [1]. This tension highlights a broader question facing the AI industry: how to communicate genuine risks without resorting to sensationalism, and whether companies are using safety concerns as competitive advantages rather than addressing them transparently. As AI capabilities advance, particularly in sensitive domains like cybersecurity, the balance between responsible disclosure and marketing strategy will remain contentious, with competitors watching closely to see who crosses the line between caution and exploitation.