47 Sources
[1]
Trump officials may be encouraging banks to test Anthropic's Mythos model | TechCrunch
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned bank executives for a meeting this week where they encouraged the executives to use Anthropic's new Mythos model to detect vulnerabilities, according to Bloomberg. Indeed, while JPMorgan Chase was the only bank listed as one of the initial partner organizations with access to the model, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are reportedly testing Mythos as well. Anthropic announced the model this week but said it would be limiting access for now, in part because Mythos -- despite not being trained specifically for cybersecurity -- is too good at finding security vulnerabilities. (Others suggested this was hype or simply a smart enterprise sales strategy.) The report is particularly surprising since Anthropic is currently battling the Trump administration in court over the Department of Defense's designation of Anthropic as a supply-chain risk; that designation came after negotiations fell apart over Anthropic's efforts to limit how its AI models can be used by the government. Meanwhile, the Financial Times reports that U.K. financial regulators are also discussing the risk posed by Mythos.
[2]
How Anthropic Learned Mythos Was Too Dangerous for the Wild
One balmy February evening in Bali, Nicholas Carlini stepped away between events at a wedding, opened his laptop, and set out to do some damage. Anthropic PBC had just made a new artificial intelligence model, called Mythos, available for internal review, and Carlini -- a well-known AI researcher -- intended to see what kind of trouble it could cause. Anthropic pays Carlini to stress-test its AI models to see whether hackers could leverage them for espionage, theft or sabotage. From Bali, where he and his wife were attending an Indian wedding, Carlini was staggered at what the model could do. Within hours he found numerous techniques to infiltrate systems used around the world. Once Carlini was back in Anthropic's downtown San Francisco office, he discovered Mythos was able to autonomously create powerful break-in tools, including against Linux, the open-source code that underpins most of modern computing. Mythos orchestrated the digital equivalent of a bank robbery: getting past security protocols and through the front door of networks, and breaking into digital vaults that gave it access to online treasures. AI had picked locks before, but now it could pull off an entire heist. Carlini and some of his colleagues began alerting staff to what they'd found. And each day they continued to discover high-severity and critical bugs in the systems Mythos probed, the kind of flaws normally uncovered by the world's best hackers. Meanwhile, Anthropic's Frontier Red Team -- a group of 15 "Ants," or Anthropic employees -- was experimenting in much the same way. The team aims to ensure that Anthropic's models can't be used to harm humanity. Its members will ship in robotic dogs and place them in a warehouse with engineers to test whether Claude could be used to control them maliciously, or consult with biologists to understand whether the chatbot could be used to create biological weapons. Now, they were realizing that the biggest risk Mythos posed was to cybersecurity.
"Within hours of getting the model, we knew it was different," says Logan Graham, who runs Anthropic's Frontier Red Team. A previous model, Opus 4.6, had shown indications it could help people exploit vulnerabilities in software. Mythos could exploit the vulnerabilities on its own, Graham says. This was a national security risk, he warned Anthropic's executives. That left Graham with the unenviable task of telling his bosses that their next major revenue generator was too hazardous to release to the public. Anthropic's co-founder and chief science officer, Jared Kaplan, said he had been monitoring Mythos' training "very carefully" as the model was built. By January he was starting to realize how capable Mythos was at finding vulnerabilities. Kaplan, a theoretical physicist, needed to consider whether these flaws were curiosities or "something very relevant to the infrastructure of the internet." He concluded it was the latter. Over the course of a week or two in late February and early March, he and co-founder Sam McCandlish weighed whether they could release the model. Around the first week of March, the executive team -- including Chief Executive Officer Dario Amodei, President Daniela Amodei, Chief Information Security Officer Vitaly Gudanets and others -- huddled to hear Kaplan and McCandlish's pitch. Mythos, they said, was too much of a risk to release generally. But Anthropic should let other companies, maybe even competitors, try it out. "It quickly became clear that we wanted to do something fairly unusual, that this wasn't going to be the same as the last launch," Kaplan said. The company agreed, and greenlit Mythos' use as a cyber defense tool. The response was immediate. On the same day Anthropic publicly disclosed Mythos' existence, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened Wall Street leaders for a meeting in Washington. The message: Use Mythos to find your weaknesses -- now.
The executives who attended refused to share what was discussed even with some of their top advisers, a sign of the gravity of the meeting, according to people close to them who asked not to be named describing private conversations. The urgent warnings from White House officials about Mythos' potency as a hacking tool -- and their advice to use it defensively -- point to the way that AI has become a decisive force in cybersecurity. Anthropic released Mythos to a limited group of organizations as part of "Project Glasswing," enabling the likes of Amazon Web Services Inc., Apple Inc., and JPMorgan Chase & Co. to experiment with it. Government agencies have also expressed interest. Prior to external release, Anthropic briefed senior officials across the US government on Mythos Preview's full capabilities, including both its offensive and defensive cyber applications. The company is having ongoing discussions with international governments too, said an Anthropic official who asked not to be named discussing internal matters. Competitor OpenAI also pounced on the attention, saying Tuesday that it would release a tool intended to spot software flaws, called GPT-5.4-Cyber. Anthropic hasn't publicly released Mythos as a cybersecurity tool, and many outside researchers haven't had a chance to validate the company's claims. But Anthropic's unprecedented decision to gate access reflects a growing view inside the industry and government that AI is changing cybersecurity economics by reducing the cost of finding vulnerabilities, compressing the time needed to investigate targets and lowering the skill barrier for certain types of attacks. Anthropic warns that Mythos' ability to act with greater autonomy comes with risk. In testing an earlier version of the model, the company found dozens of examples of "concerning" behavior, including not following human direction and even, in rare cases, covering its tracks when violating human instructions.
In one incident, the model developed a multi-step exploit to escape the limited environment it was confined to, gain broad access to the internet and begin publishing material online, all on its own initiative. The software that now underpins everything from banking apps to hospital systems is laced with obscure coding flaws that trained specialists spend weeks or months trying to identify. Occasionally hackers get there first, resulting in data breaches and ransomware attacks that can have devastating consequences. High-profile names have been quick to question just how powerful Mythos really is, or how much of a risk it would pose if released. "A growing number of people are wondering if Anthropic is the AI industry's 'boy who cried wolf,'" White House AI advisor David Sacks wrote on the social media site X. "If Mythos-related threats don't materialize, the company will have a serious credibility problem." But hackers have already adopted large language models to launch complex malicious campaigns. A Chinese cyber-espionage group already used Anthropic's Claude to try breaching roughly 30 targets, while other attackers have used AI to steal data from government agencies, deploy ransomware and quickly break into hundreds of firewall tools meant to safeguard data. Among US government officials focused on national defense, the introduction of Mythos has created profound uncertainty about how to evaluate cybersecurity risk, according to a person familiar with the matter. Equipping an individual hacker with the model, or similar AI tools, would likely be a transformation equivalent to turning a conventional soldier into a special forces operator, the person said. At the same time, Mythos appears likely to be a force multiplier, the person said: enabling a criminal hacking gang to operate at the level of a small nation-state, and a small country's intelligence and military hackers to carry out breaches of the sort now done by China.
"I really believe we will be safer and better, and we will be much more secure with AI," said Rob Joyce, former director of cybersecurity at the National Security Agency. "But I think there's this dark period between now and some time in the future where the advantage is very much offensive AI, where the people who haven't done the basics will get hacked." Mythos isn't the only model doing this kind of work. Numerous organizations have been using LLMs to find vulnerabilities, including previous Claude models and Google's Big Sleep. JPMorgan had been using large language models successfully before the Mythos announcement to help find vulnerabilities in the bank's software, according to a person familiar with the matter who requested anonymity to discuss confidential internal security projects. Efforts to identify "zero-day" flaws and write code to exploit them, which had previously taken days or weeks, now can take as little as an hour or even minutes, the person said. Zero-days are so called because they're unknown to defenders, who thus have zero days to fix them. JPMorgan has focused primarily on supply-chain and open-source software, and has found flaws and subsequently alerted vendors, the person said. CEO Jamie Dimon said during an earnings call that Mythos "shows a lot more vulnerabilities need to be fixed." The bank had already been in talks with Anthropic to test the model before the public became aware of it, according to a person familiar with the matter who wasn't authorized to discuss the matter publicly. JPMorgan declined to comment. Other Wall Street banks and technology companies are now experimenting with Mythos to help plug holes before hackers can find a way in. Goldman Sachs Group Inc., Citigroup Inc., Bank of America Corp. and Morgan Stanley are among the financial institutions testing the technology internally, Bloomberg News has reported. Cisco Systems Inc.
staffers are especially wary of whether intruders will use AI to try to find pathways into software that runs its networking equipment worldwide, such as routers, firewalls and modems, said Anthony Grieco, Cisco's chief security and trust officer. He is particularly worried that AI could accelerate attacks on devices that have reached end-of-life and will no longer be updated by Cisco. Plugging the holes that AI tools are finding will remain problematic. That process, known as security patching, is such a costly, slow exercise for organizations that many choose not to squash their bugs at all. Devastating attacks like the one at Equifax Inc., where intruders stole records of about 147 million people, were possible because organizations didn't apply known fixes. Anthropic is in discussions with federal agencies, even after the Trump administration classified the AI firm as a supply-chain threat following its refusal to help facilitate mass surveillance of Americans. The Treasury Department was seeking to gain access to Mythos this week, and Secretary Bessent said the model would help the US maintain an AI edge over China. In one instance, the model wrote a web browser exploit that chained together four vulnerabilities, a feat that would be a major challenge for human hackers. Such vulnerability chains can open a path into otherwise highly secure systems, as in the Stuxnet attack that damaged centrifuges at an Iranian nuclear facility, according to cybersecurity research reports. Mythos also was able to identify and exploit zero-day vulnerabilities in every single major web browser when directed to do so, according to Anthropic. Anthropic said it used Mythos to find exploits in Linux code, which is "underpinning most modern computing," according to Jim Zemlin, executive director of the Linux Foundation. That includes everything from Android smartphones and internet routers to NASA supercomputers.
Mythos autonomously found several flaws in the open-source code that would allow an attacker to take complete control of a machine. Now, dozens of people at the Linux Foundation are experimenting with Mythos. For Zemlin, one question is whether the Anthropic model will yield the kinds of insights that would help developers write better software, so there are fewer vulnerabilities in the first place. "We're great at finding bugs," he said. "We're terrible at fixing them."
[3]
Bessent, Powell warn bank CEOs about Anthropic model risks, Bloomberg News reports
April 9 (Reuters) - U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank executives on Tuesday to warn them about the cyber risks raised by Anthropic's latest AI model, Bloomberg News reported, citing sources. The meeting at the Treasury Department in Washington aimed to ensure banks are aware of potential risks posed by Anthropic's Mythos and similar models, and are taking steps to defend their systems, the report said on Thursday. Reuters could not immediately verify the report. Reporting by Carlos Méndez in Mexico City; Editing by Sumana Nandy
[4]
Scott Bessent called in US bank CEOs to discuss Anthropic model's cyber risks
US Treasury secretary Scott Bessent summoned the leaders of some of the largest US banks earlier this week to discuss the cyber risk posed by the latest AI model from Anthropic, according to people familiar with the matter. The meeting was attended by the leaders of Bank of America, Citigroup, Goldman Sachs, Morgan Stanley and Wells Fargo, who were already in Washington for a meeting of the banking lobby group, the people said. JPMorgan Chase chief executive Jamie Dimon was invited to the conversation with Bessent but could not attend. The meeting, attended by Federal Reserve chair Jay Powell, was first reported by Bloomberg. The summons from the Treasury secretary underscores the concern inside the Trump administration over the capabilities of Anthropic's latest model, given its advanced ability to detect cyber security vulnerabilities that could be exploited by bad actors. Executives have been warning for years of the cyber risks facing the financial system. In his annual letter published this week, Dimon wrote that cyber risk "remains one of our biggest risks," that "AI will almost surely make this risk worse" and that it would require significant investment for defence. Anthropic on Tuesday released the model, dubbed Claude Mythos Preview, to a select group of partners, including Amazon, Apple and Microsoft, to give them a "head start on being able to secure vulnerabilities". Mythos, which is a "general purpose" model with capabilities beyond cyber security, marked the first time Anthropic had limited the launch of a new model. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a statement announcing the release. It added that Mythos had already found thousands of severe vulnerabilities, including in "every major operating system and web browser", some of which had been undetected for decades.
Anthropic said it has held talks with US government officials about the model's "offensive and defensive cyber capabilities". The limited release of Mythos came after two incidents where data from the start-up leaked online -- including documents related to Mythos and the underlying code for its Claude assistant -- raising concerns about Anthropic's security. The company blamed human error for the incidents. Anthropic declined to comment on the bank CEOs meeting. The Federal Reserve, JPMorgan, Goldman Sachs, Bank of America and Wells Fargo declined to comment. The Treasury, Citibank and Morgan Stanley did not respond.
[5]
Jamie Dimon says Anthropic's Mythos reveals 'a lot more vulnerabilities' for cyberattacks
JPMorgan Chase CEO Jamie Dimon said Tuesday that while artificial intelligence tools could eventually help companies defend themselves from cyberattacks, they are first making them more vulnerable. Dimon said that JPMorgan was testing Anthropic's latest model -- the Mythos preview announced by the AI firm last week -- as part of its broader effort to reap the benefits of AI while protecting against bad actors wielding the same technology. "AI's made it worse, it's made it harder," Dimon told analysts on the bank's earnings call Tuesday morning. "It does create additional vulnerabilities, and maybe down the road, better ways to strengthen yourself too." When asked by a reporter about Mythos, Dimon seemed to refer to Anthropic's warning that the model had already found thousands of vulnerabilities in corporate software. "I think you read exactly what it is," Dimon said. "It shows a lot more vulnerabilities need to be fixed." The remarks reveal how artificial intelligence, a technology welcomed by corporations as a productivity boon, has also morphed into a serious threat by giving bad actors new ways to hack into technology systems. Last week, Treasury Secretary Scott Bessent summoned bank CEOs to a meeting to discuss the risks posed by Mythos. JPMorgan, the world's largest bank by market cap, has for years invested heavily to stay ahead of threats, with dedicated teams and constant coordination with government agencies, Dimon said. "We spend a lot of money. We've got top experts. We're in constant contact with the government," he said. "It's a full-time job, and we're doing it all the time."
[6]
Banks Warned About Anthropic's New, Powerful A.I. Technology
In an unusual move, the Treasury secretary and the Federal Reserve chair gathered bank executives to caution about cyberthreats posed by artificial intelligence. The leaders of some of America's largest banks were warned by a top government official this week about a new artificial intelligence model from Anthropic that could lead to heightened risks of cyberattacks, according to three people briefed on the matter but not permitted to speak publicly. The stark message was delivered on Tuesday morning by Treasury Secretary Scott Bessent to a small group of chief executives, including those from Bank of America, Citi and Wells Fargo, in a hastily arranged meeting in Washington, D.C. Mr. Bessent, the people said, cautioned the banks that allowing the new A.I. software to run through their internal computer systems could pose a serious risk to sensitive customer data. The Federal Reserve chair, Jerome H. Powell, who has spoken publicly in recent weeks about the threat of cyberattacks against the financial system, also attended Tuesday's meeting with the bank leaders. The warnings relate to a new intelligence model that Anthropic named Claude Mythos Preview. Anthropic has said the model is particularly good at identifying security vulnerabilities in software that human developers could not find. At Tuesday's meeting, the people briefed on the matter said, the bank executives were told that the new model might be so effective at finding security weaknesses inside banks that hackers or other so-called third-party bad actors could get their hands on the information and exploit it. Anthropic itself has warned about the risks. The company said this week that the model's advancements were so powerful and potentially dangerous that they could not safely be released to the public yet and would instead be contained to a coalition of 40 companies that it called "Project Glasswing." 
That group includes at least one bank, JPMorgan, the nation's largest, which earlier said it would use the software "to evaluate next-generation A.I. tools for defensive cybersecurity across critical infrastructure." The Trump administration and Anthropic are locked in a legal battle over the Defense Department's recent designation of the company as a "supply chain risk." The government issued that designation after Anthropic insisted on putting limits on the use of its A.I. technology in war. In a statement, a Treasury spokesperson said, "This week's meeting was convened by Secretary Bessent to initiate a process for planning and coordination of our approach to the rapid developments taking place in A.I." The existence of the meeting was reported earlier by Bloomberg News. The Fed declined to comment. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," Kevin A. Hassett, director of the National Economic Council, told Fox News on Friday. "There's definitely a sense of urgency." Logan Graham, an Anthropic executive, said in a statement that the new technology would help "secure infrastructure that is critical for global security and economic stability."
[7]
Anthropic's Mythos Is a Wake-up Call For Everyone, Not Just Banks
Mythos, a new artificial intelligence model that Anthropic PBC has teased as too dangerous to release, looked at first like a problem for banks. Days after the company announced the new technology, US Treasury Secretary Scott Bessent summoned Wall Street leaders to make sure they were taking precautions to defend their systems, creating invaluable publicity for Anthropic and raising questions about who gets an exclusive peek at its threatening progeny. The Treasury is now pushing for access to Mythos. One organization that already has it is the UK's AI Security Institute, which has become the world's top neutral arbiter of what counts as safe and secure AI. It found that some of the hype around Mythos is warranted. It is indeed more capable of being used for complex cyberattacks than other AI tools such as OpenAI's ChatGPT or Google's Gemini. But it is most perilous for "weakly defended" or simplified systems. Large banks have some of the most secure IT in the world, and while Mythos and other powerful AI pose a threat in the wrong hands, it's the much broader array of small and medium-sized companies that look most vulnerable to hackers and bad actors using the tools. Cyber specialists have long complained that companies treat security as an afterthought, and the result is online services and software that are riddled with bugs, handing hackers a possible way to infiltrate a computer system. Tech companies have an approach for dealing with this, called "responsible disclosure." Once a flaw is found in their software, they'll announce it to the world with a suggested fix, giving their customers time to make the patch and move on with their lives. Microsoft Corp.'s version of this is Patch Tuesday, which despite its name refers to a monthly disclosure of flaws the company has found in Office 365, Windows and other products. IT staff at banks like Barclays Plc and Wells Fargo & Co.
will take those suggested patches, test them to make sure they don't break any of their existing systems, get sign off from management, and then deploy them. That takes weeks or months. Up until the advent of generative AI, the process worked just fine because it would typically take an even longer time for bad actors to find a way to attack a system based on the flaw that had been disclosed. They'd have to study the bug and also experiment with different methods for exploiting it. Artificial intelligence tools have changed all that. Even two years ago, hackers could take the details of a disclosure and paste it into ChatGPT, then tell the bot to scan a public database of source code such as GitHub for other patterns which could then be exploited. Let's say for instance that Microsoft announced that its researchers had found a flaw in how Office 365 handles a file. A chatbot could not only suggest how to exploit it but quickly find other software like Microsoft Outlook or Teams that have similar deficiencies. This has all got even easier for hackers in the last few months, as AI companies have imbued their models with "agentic" capabilities, effectively giving them the power to act independently. Anthropic's Claude Cowork, released in January, can now carry out tasks like sending emails and making calendar appointments. For those who want to break into software, such tools won't just find weaknesses, they'll try different ways to hammer at them automatically until one method works. Mythos can even "chain" a software bug into multi-step attacks, something only highly-skilled human hackers had been able to do previously. It's the equivalent of a burglar planning a series of steps for a break-in: finding that first open window, using it to unlock a door from the inside and then disabling the alarm. Each step alone isn't enough but together they get full access. Till now, generative AI's impact on cyber security has been amorphous. 
There was no single tool that could launch devastating new attacks, but large language models were still harnessed to supercharge old tricks of the trade. Hackers have used chatbots to polish up emails for phishing attacks to look more credible, or real-time avatar generators to create deepfake video calls that trick people into thinking a man in his living room is a young woman. But agentic AI is set to fuel the act of hacking itself, which has long been an opportunistic pursuit for the unscrupulous. So-called black hats tend not to go after banks because they're so secure. Instead they scan the web to spot vulnerabilities, be it a hospital they can infiltrate to make ransomware demands or a mom-and-pop shop. The recent advances in AI are a problem for these organizations because the moment a flaw is disclosed by a software provider, they now have precious little time to update and patch their systems. According to zerodayclock.com, the average time between a software flaw being made public and a working attack being built has collapsed from 771 days in 2018, to less than four hours today. Anthropic's disclosure of Mythos certainly benefits its own publicity efforts ahead of an initial public offering, adding to the mystique around the potency of its technology. But it's also forcing a much-needed reckoning over how the window of time between published IT flaws and their exploitation has effectively vanished.
That raises questions over whether "responsible disclosure" is such a smart idea in the first place, and whether the process of patching flaws over weeks and months is now fruitless. Even Wall Street can't answer these questions yet, but banks at least have the staffing and the money to work out the difficult structural changes needed to eventually do patches in near-real time. The bigger problem will be for smaller firms that need to move just as fast, and that will require technical and regulatory help the market can't yet provide.
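The shrinking patch window described above can be made concrete with a short sketch. The script below is a purely hypothetical illustration -- the package names, versions, and advisory entries are invented, not drawn from any real feed -- of the kind of continuous check that replaces a monthly patch cycle: compare what is installed against published advisories and measure how long each vulnerable component has been exposed since disclosure.

```python
from datetime import datetime, timezone

# Hypothetical advisory feed: (package, first patched version, disclosure time).
# In practice this data would come from a vendor bulletin or a vulnerability
# database; these entries are invented for illustration only.
ADVISORIES = [
    ("examplelib", (2, 4, 1), datetime(2026, 4, 7, tzinfo=timezone.utc)),
    ("demoparser", (1, 0, 9), datetime(2026, 4, 6, tzinfo=timezone.utc)),
]

# Hypothetical inventory of what is installed on one host.
INSTALLED = {"examplelib": (2, 4, 0), "demoparser": (1, 1, 0)}

def unpatched(installed, advisories, now):
    """Return (package, hours_exposed) for each known-vulnerable install."""
    findings = []
    for pkg, fixed_in, disclosed in advisories:
        version = installed.get(pkg)
        # Tuple comparison: (2, 4, 0) < (2, 4, 1) means the fix is missing.
        if version is not None and version < fixed_in:
            hours = (now - disclosed).total_seconds() / 3600
            findings.append((pkg, round(hours, 1)))
    return findings

now = datetime(2026, 4, 8, 12, tzinfo=timezone.utc)
for pkg, hours in unpatched(INSTALLED, ADVISORIES, now):
    print(f"{pkg}: vulnerable version installed, disclosed {hours}h ago")
```

When the average time from disclosure to working exploit was measured in years, running a check like this monthly was adequate; if that gap is now hours, the same check has to run continuously and feed an automated patch pipeline rather than a human queue.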
[8]
Trump administration blacklisted Anthropic - now tells banks to use its AI
In short: Treasury Secretary Scott Bessent and Fed Chair Jerome Powell are urging Wall Street's biggest banks to test Anthropic's Mythos AI model for cybersecurity vulnerabilities, even as the Pentagon fights Anthropic in court after branding it a supply chain risk for refusing to remove safety guardrails on autonomous weapons and mass surveillance. JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all reportedly testing the model. Mythos, which found thousands of zero-day flaws across major operating systems and browsers, is being distributed through a restricted programme called Project Glasswing to roughly 50 organisations. UK regulators are also scrambling to assess the risks. The Trump administration is quietly encouraging America's largest banks to test technology from the same AI company it has spent two months trying to destroy. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley this week and urged them to use Anthropic's new Mythos model to detect cybersecurity vulnerabilities in their systems, according to Bloomberg. The recommendation is remarkable for its contradiction. Anthropic is currently fighting the Department of Defense in federal court after Defense Secretary Pete Hegseth designated the company a "supply chain risk", a label that bars it from military contracts and directs defence contractors to stop using its technology. The designation came after Anthropic refused to remove two safety restrictions from its AI models: no use in fully autonomous weapons, and no deployment for mass surveillance of American citizens. Now, two of the administration's most senior economic officials are telling Wall Street to adopt the very product the Pentagon has tried to blacklist. Claude Mythos Preview is a frontier model that Anthropic did not explicitly train for cybersecurity.
The vulnerability-finding capability emerged as what the company describes as a downstream consequence of general improvements in code reasoning and autonomous operation. During testing, Mythos identified thousands of zero-day vulnerabilities -- flaws previously unknown to software developers -- across every major operating system and web browser. The capabilities were significant enough that Anthropic chose not to release the model publicly. Instead, it launched Project Glasswing, a controlled programme giving access to roughly 50 organisations including Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks, and JPMorgan Chase. Anthropic has committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations as part of the initiative. The framing -- a model "too dangerous to release" -- has drawn scepticism. Tom's Hardware noted that claims of "thousands" of severe zero-day discoveries relied on just 198 manual reviews, and that many of the flagged vulnerabilities were in older software or were impractical to exploit. Others in the security community suggested the restricted release looked less like responsible AI governance and more like a smart enterprise sales strategy: create scarcity, generate fear, and let the customers come to you. The collision between the Bessent-Powell recommendation and the Hegseth designation is not a matter of mixed signals; it is two branches of the same administration pursuing openly contradictory policies toward the same company. The Pentagon dispute began in February, when Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to drop the company's safety restrictions or lose its $200 million defence contract. Amodei refused. Hegseth responded by declaring Anthropic a supply chain risk, and President Trump ordered federal agencies to stop using its technology. A Pentagon official accused Amodei of having a "God complex."
Trump called Anthropic a "radical left, woke company." The courts have since split. A federal judge in California issued a preliminary injunction blocking the supply chain designation, writing that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government." An appeals court in Washington, D.C., denied Anthropic's request to temporarily halt the blacklisting while the case proceeds. The net effect: Anthropic is excluded from DoD contracts but can continue working with other government agencies. It is into that gap, excluded from the Pentagon but not from the Treasury or the Fed, that Bessent and Powell stepped this week.

JPMorgan Chase was the only bank listed as an official Project Glasswing partner, but Bloomberg reported that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all testing Mythos internally. The use cases reportedly include vulnerability detection, fraud-risk flagging, and compliance workflow automation across financial systems. The speed of adoption reflects a genuine fear. If Mythos can find zero-day vulnerabilities in operating systems and browsers, it can presumably find them in banking infrastructure too, and so can any sufficiently capable model that follows. The defensive logic is straightforward: better to find the holes before an adversary's AI does.

The regulatory response has been international. The Financial Times reported that UK officials at the Bank of England, the Financial Conduct Authority, and HM Treasury are in discussions with the National Cyber Security Centre to examine potential vulnerabilities highlighted by Mythos. Representatives from major British banks, insurers, and exchanges are expected to be briefed within the fortnight.

The Mythos episode exposes a structural problem in the administration's approach to AI. 
The same government that branded Anthropic a national security threat because it refused to remove safety guardrails is now urging the financial system to depend on Anthropic's technology for its own security. The message to Anthropic is incoherent: you are too dangerous to trust with defence contracts, but indispensable enough that the Treasury Secretary personally phones bank CEOs to recommend your product. For Anthropic, the contradiction is strategically useful. Every bank that adopts Mythos deepens the company's integration into critical national infrastructure, making the supply chain designation look increasingly absurd. For the administration, the episode reveals what happens when national security policy is driven by personal grievance rather than coherent strategy: the left hand blacklists what the right hand is busy deploying. The banks, for their part, appear untroubled by the contradiction. When the Treasury Secretary and the Fed Chair tell you to test something, you test it, regardless of what the Pentagon thinks about the company that made it.
[9]
Powell, Bessent discussed Anthropic's Mythos AI cyber threat with major U.S. banks
Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with major U.S. bank CEOs this week to discuss the possible cyber risks raised by Anthropic's Mythos model, CNBC confirmed Friday. The bank heads were already in Washington, D.C., for another meeting, but a special gathering was called to discuss Mythos, according to a person familiar with the matter, who asked not to be named in order to share information about a confidential matter. Earlier this week, Anthropic rolled out the new artificial intelligence model in a limited capacity over concerns that hackers could exploit its capabilities. Anthropic did not immediately respond to CNBC's request for comment.
[10]
Anthropic Wins Accolades From Canada's AI Minister Over Mythos Approach
A Canadian cabinet minister praised Anthropic PBC's decision to introduce its Mythos model to select companies and allow them to test the technology before releasing it more widely. "Working with defenders first, rather than releasing this new model broadly, is the responsible path and gives people protecting critical systems a head start," Evan Solomon, Canada's minister responsible for artificial intelligence, said Tuesday after meeting with officials from the AI company. Anthropic has warned that Mythos is powerful enough that it may be capable of powering cyberattacks if companies don't test it against their own systems and build defenses ahead of any wider release. The San Francisco-based company has limited access to a small number of firms initially, including JPMorgan Chase & Co., Amazon.com Inc. and Apple Inc. They're all part of "Project Glasswing," which will work to secure the most important systems before similar AI models become available. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders last week to a meeting on the related cyber risks. Treasury's technology team is now seeking to gain access to the Anthropic model so it can begin looking for vulnerabilities, a person familiar with the matter told Bloomberg News. Officials in the Canadian financial sector and government are also in active talks about the potential threats. Last week, members of a government-industry committee known as the Canadian Financial Sector Resiliency Group met to discuss Mythos. The group includes representatives from the Bank of Canada, regulators, banks and other financial firms. 
Desjardins Group, a large financial co-operative based in the province of Quebec, said it's "actively preparing" for the launch of Mythos. "We are working closely with various working groups, both in Canada and internationally, to anticipate challenges and ensure that our organization is ready to adopt this technology in a responsible and secure manner," a spokesperson said in an emailed statement. Desjardins was hit by a huge leak in 2019 of the personal information of millions of Canadian customers, though it turned out to be the work of a rogue employee. Solomon's brief comments made no mention of the specific vulnerabilities posed by the technology, nor of any efforts by the Canadian government to get access to Anthropic's model for testing. Instead, he lauded the company's preparation. "This is the kind of proactive approach we expect from frontier AI companies: identify risks early, engage governments and the security community, and put safeguards in place before capabilities are widely available," he said. On Tuesday, Tobias Adrian, director of the monetary and capital markets department at the International Monetary Fund, warned that governments and regulators must "stay at the frontier" of rising threats from artificial intelligence, adding that it's important for global financial stability that security threats are quickly addressed.
[11]
Scoop: BNY gets access to OpenAI, Anthropic's advanced cyber models
Why it matters: Wall Street is working overtime to win the AI security race.

What they're saying: Anthropic and OpenAI recognize the importance of releasing their cyber-capable models to certain institutions early, Vince tells Axios. It's key to protecting critical infrastructure, "and in our case, obviously the financial services world," Vince says.
* The AI labs also want feedback and real-world testing, Vince says.
* Other firms with access to these previews will be able to share lessons learned with one another as well as with the labs themselves, Vince said.

Catch up quick: The access comes after Treasury Secretary Scott Bessent and Fed Chair Jerome Powell called a meeting with the biggest names on Wall Street to discuss Mythos, first reported by Bloomberg and confirmed by Axios.
* The meeting focused on risks of AI-powered attacks on bank systems as well as preventative measures.

Zoom in: OpenAI's new model variant, GPT-5.4-Cyber, will be rolled out to a broader set of organizations than Anthropic's Mythos, which initially reached about 40 enterprises.
* While Anthropic signaled that its model was too dangerous to release broadly, OpenAI is making tools more widely available for defensive cyber work while still preventing nefarious actors from accessing them, Axios' Sam Sabin writes.

Follow the money: BNY is all-in on AI.
* The bank, which plans to announce its earnings later this morning, has over 100 digital employees that have their own tasks, managers and email addresses.
* Under Vince's leadership, BNY became the best-performing stock in an index tracking a group of major banks, up 218%.

What we're watching: How banks maintain their long-held status as titans of cybersecurity defense in an AI-powered world.
[12]
Goldman Sachs chief 'hyper-aware' of risks from Anthropic's Mythos AI
US bank has the Claude model and is working closely with the tech firm to improve cyber protection Goldman Sachs's chief executive, David Solomon, has said he is "hyper-aware" of the capabilities of Anthropic's Mythos AI model and is working "closely" with the tech firm after it issued warnings about the cybersecurity risk it poses. The US bank had been monitoring the rapid advances in artificial intelligence, including large language models (LLMs), as part of wider efforts to protect itself from hackers. "Obviously the LLMs are making rapid progress and we're hyper-aware of the enhanced capabilities of these new models with the help of the US government and the model publishers," Solomon told analysts on an earnings call on Monday. That included Anthropic, the company behind the Claude family of AI tools. Last week it claimed that its latest model, Mythos, posed an unprecedented risk because of its ability to expose flaws in IT systems. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blogpost last Wednesday. "The fallout - for economies, public safety, and national security - could be severe." Solomon said on Monday: "We're aware of Mythos and its capabilities ... We have the model. We're working closely with Anthropic and all of our security vendors to kind of harness frontier capabilities wherever it's possible. And this will continue to be an important focus. "We are very focused on supplementing our cyber and infrastructure resilience. And this is part of our ongoing capabilities that we have been investing in, and are accelerating our investment in." The news comes after the US Treasury secretary, Scott Bessent, summoned Solomon and other big American bankers to Washington to discuss the Mythos model last week. 
That meeting focused on heads of so-called systemically important banks - where regulators believe that a major disruption to their operations, or their potential collapse, would put financial stability at risk. On Monday the UK government's AI Security Institute (AISI) warned that Mythos was a "step up" over previous models in terms of the cyber threat it posed. AISI said Mythos could carry out attacks that required multiple actions and discover weaknesses in IT systems without human intervention. It said these tasks would normally take human professionals days to carry out. Mythos was the first AI model to successfully complete a 32-step simulation of a cyber-attack created by AISI, solving the challenge in 3 out of its 10 attempts. AISI said Mythos appears to be capable of autonomously attacking small, weakly defended IT systems, but it could not say for sure whether it could attack well-defended systems because its test environments lack security features such as defensive tools. The AISI blogpost ended with a warning that future advanced AI models will only improve on Mythos, so "investment now in cyber defence is vital". UK regulators are due to raise the issue of Mythos's risks with British bank bosses and government officials in the coming weeks. The Cross Market Operational Resilience Group (CMorg), made up of chief executives as well as officials from the Treasury, Bank of England, Financial Conduct Authority and National Cyber Security Centre, is due to meet within the next fortnight. The Bank of England, which is handling communications regarding CMorg, declined to comment.
[13]
The AI that found 27-year-old vulnerabilities no human ever caught before just forced an emergency meeting with every major Wall Street CEO | Fortune
Treasury Secretary Scott Bessent and Fed chair Jerome Powell reportedly convened Wall Street leaders on Tuesday for an emergency meeting about Anthropic's latest AI model, flagging a heightened cybersecurity risk. Bessent and Powell assembled the group of high-powered execs at the Treasury's headquarters to ensure banks were aware of the cyber risks presented by Anthropic's new model, Mythos, and similar future models, reported Bloomberg and Financial Times. Sources who spoke to Bloomberg said those in attendance included Citigroup CEO Jane Fraser, Morgan Stanley CEO Ted Pick, Bank of America CEO Brian Moynihan, Wells Fargo CEO Charlie Scharf, and Goldman Sachs CEO David Solomon. JPMorgan CEO Jamie Dimon was also invited, but was unable to attend, the sources said. The Federal Reserve declined to comment to Fortune. The Treasury didn't immediately respond to Fortune's requests for comment. The meeting comes just weeks after Fortune exclusively reported Anthropic was developing an unreleased model described by the company as "by far the most powerful AI model" it had ever developed, which the AI company inadvertently made public last month through its content management system. Later, the company acknowledged that model was Claude Mythos. Anthropic on Tuesday released a report entitled "Assessing Claude Mythos Preview's cybersecurity capabilities," citing the powerful capabilities of its new model. Among the findings, Mythos Preview was able to find many 10- and 20-year-old vulnerabilities, including a 27-year-old vulnerability in OpenBSD, an operating system with a reputation as one of the most secure. Anthropic briefed senior U.S. government officials and industry stakeholders on Mythos Preview's capabilities ahead of its release, a person with knowledge of the matter told Fortune. 
In a blog post, the company said it is willing to work with officials at all levels of government to ensure national security is a priority when rolling out new AI models, and that the U.S. maintains a lead in AI technologies. Anthropic told Fortune that partnering with the government was the company's plan from the start (the company is currently in a legal battle with the Pentagon after the Defense Department blacklisted it for placing restrictions on use of its AI technologies). In partnership with Amazon Web Services, Amazon, Google, JPMorgan Chase and other key tech companies, Anthropic this week launched Project Glasswing, an initiative aimed at securing critical software amid AI advancements. As part of the partnership, Anthropic said in a blog post it would share what it learns with the entire industry. Aside from tech and financial services partners, the company has extended access to 40 additional organizations that build or deploy critical software infrastructure, and committed up to $100 million in Mythos Preview usage credits. Anthropic noted the project was launched as a reaction to the capabilities the company has observed in its new frontier model Mythos, which it believes could "reshape cybersecurity." "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," the post warned. "The fallout -- for economies, public safety, and national security -- could be severe." Anthropic said in the post Tuesday it would release the new model initially to a limited group of industry partners to "enable defenders to begin securing the most important systems before models with similar capabilities become broadly available." Other business leaders have sounded the alarm on the cybersecurity risks powerful AI models pose. 
Dimon outlined in his annual letter to shareholders the cybersecurity risks major industries and corporations face from AI, saying cybersecurity "remains one of our biggest risks." He added, "AI will almost surely make this risk worse". As part of Project Glasswing, JPMorgan aims to reduce cyber risks stemming from AI's fast-evolving capabilities. "Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase's mission, and we believe the industry is strongest when leading institutions work together on shared challenges," Pat Opet, JPMorgan chief information security officer, said in a statement. "Project Glasswing provides a unique, early stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure both on our own terms and alongside respected technology leaders," he added.
[14]
Powell, Bessent Warn Banks About Security Risks From Anthropic's Mythos AI: Bloomberg - Decrypt
Anthropic has limited access to the model while it evaluates potential security risks. U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly convened a meeting with Wall Street bank CEOs earlier this week to warn about cybersecurity risks tied to a new artificial intelligence model from Anthropic. According to a report by Bloomberg, the meeting included executives from Citigroup, Bank of America, Wells Fargo, Morgan Stanley, and Goldman Sachs. Officials discussed Anthropic's new AI model Mythos, which has recently drawn broad concern over its apparently advanced cybersecurity capabilities. Officials convened the meeting to ensure banks understand the risks posed by systems capable of identifying and exploiting software vulnerabilities across operating systems and web browsers, and to encourage institutions to strengthen defenses against potential AI-assisted cyberattacks targeting financial infrastructure. Security researchers have warned that tools capable of automatically discovering vulnerabilities could accelerate both defensive security work and malicious hacking if misused. Anthropic's Mythos model first surfaced in March after draft materials about the system leaked online, revealing what the company described as its most capable AI model yet. In testing, the system reportedly found thousands of previously unknown software vulnerabilities, including zero-day flaws across major operating systems and web browsers. Anthropic researchers said in a report earlier this week that Mythos Preview's vulnerability-discovery capabilities were not intentionally trained, but instead emerged from broader improvements in the model's coding, reasoning, and autonomy. "The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them," the firm wrote. 
Because of those capabilities, Anthropic has restricted access to a small group of cybersecurity organizations. "Given the strength of its capabilities, we're being deliberate about how we release it," Anthropic said in a statement. "As is standard practice across the industry, we're working with a small group of early access customers to test the model. We consider this model a step change and the most capable we've built to date." To address that risk, Anthropic is testing Mythos through Project Glasswing, a collaboration with major technology and cybersecurity companies that uses the model to identify and patch vulnerabilities in critical software before attackers can exploit them. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone," the company said in a statement. "Frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play."
[15]
Why Anthropic's Mythos model has Washington and Wall Street worked up
Anthropic's most powerful new AI model, Mythos, is too dangerous to release to the public, the company says, sparking urgent discussions with governments and financial regulators. Anthropic is in discussions with the US government over its new Mythos AI model, which the company has said is too powerful to release to the public as it "poses unprecedented cybersecurity risks". The banking industry is also sounding the alarm. "The government has to know about this stuff," Anthropic's co-founder said on Monday at the Semafor World Economy event in Washington. "Absolutely, we're talking to them about Mythos, and we'll talk to them about the next models as well," he added. Referencing the public dispute with the government, which led to the company being labelled a supply-chain risk last month, he said, "I don't want that to get in the way of the fact that we care deeply about national security." The supply-chain risk designation followed the collapse of negotiations over Anthropic's efforts to limit how the US defense department can use its AI models. The move comes after Treasury Secretary Scott Bessent convened a meeting of senior American bankers in Washington to discuss the Mythos model last week. At the meeting, the banking executives were encouraged to use Anthropic's Mythos model to detect vulnerabilities, according to Bloomberg. Anthropic also said it would limit the release of its new AI model to a few tech and cybersecurity firms. The list includes Amazon, Apple and JPMorgan Chase. Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are reportedly testing the Anthropic model too, Bloomberg reported. On Monday, the UK government's AI Security Institute (AISI) issued a warning that Mythos was a "step up" over previous models in terms of the cyber threat it posed. Meanwhile, the Financial Times reported that UK financial regulators are also discussing Mythos' potential risks.
[16]
IMF chief says she's concerned about cybersecurity risks posed by Anthropic's latest AI model
Washington -- The head of the International Monetary Fund said she is concerned about a powerful new AI model from Anthropic that poses major cybersecurity risks, warning "time is not our friend on this one." Kristalina Georgieva, the IMF's managing director, said in an interview set to air Sunday on "Face the Nation with Margaret Brennan" that the world does not have the ability "to protect the international monetary system against massive cyber risks." "The risks have been growing exponentially," Georgieva said. "Yes, we are concerned. We are very keen to see more attention to the guardrails that are necessary to protect financial stability in the world of AI." On Tuesday, Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent held an urgent meeting with Wall Street leaders to discuss the cybersecurity risks posed by Anthropic's Claude Mythos Preview, sources told CBS News. A Treasury Department spokesman said in a statement that "additional coordination meetings by Treasury are planned across a number of regulators and institutions on an ongoing basis to address these developments, as well as a host of other issues." In a blog post earlier this week, Anthropic said the model has demonstrated "a leap" in the ability to spot cybersecurity vulnerabilities -- some of them decades old -- and exploit them. Anthropic is only releasing the model to select partners so they can use it to harden their systems. "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe," the company said. 
Georgieva said key financial institutions, including central banks, need to "work together" and be "very attentive" in managing the risks of cyberattacks. "It is an issue that easily can present itself in other parts of the world, and that is why we need people to cooperate," she said.
[17]
US summoned bank bosses to discuss cyber risks posed by Anthropic's latest AI model
Reports say Fed chair Jerome Powell among attenders at meeting in Washington The US Treasury secretary, Scott Bessent, summoned major American bank chiefs to a meeting in Washington this week amid concerns over the cyber risks posed by Anthropic's latest AI model, according to reports. Bosses including the Federal Reserve chair, Jerome Powell, were said to have gathered at the Treasury headquarters for the meeting after the release of the Claude Mythos AI model that Anthropic says poses unprecedented cybersecurity risks. A recent leak of Claude's code prompted the startup to release a blogpost at the beginning of the month saying that AI models had surpassed "all but the most skilled humans at finding and exploiting software vulnerabilities", adding: "The fallout - for economies, public safety, and national security - could be severe." This week's meeting was reportedly called while bank bosses were already in Washington for a lobby group meeting, with a guest list focused on heads of systemically important banks - meaning regulators believe a major disruption to their operations, or their potential collapse, would put financial stability at risk. Attenders included the Goldman Sachs chief executive, David Solomon, Bank of America's Brian Moynihan, Citigroup's Jane Fraser, Morgan Stanley's Ted Pick and the Wells Fargo boss Charlie Scharf, according to Bloomberg, which first reported details of the meeting. JP Morgan's Jamie Dimon was invited but unable to attend. In an annual letter to shareholders, published earlier this week, Dimon warned that cybersecurity "remains one of our biggest risks" and that "AI will almost surely make this risk worse". Anthropic has said that its Mythos model, yet to be shared with the public, has exposed thousands of vulnerabilities in software and popular applications, prompting it to limit the release of the new model to a small clutch of businesses, including Amazon, Apple and Microsoft. 
It marks the first time Anthropic has restricted the release of any of its products. The networking companies Cisco and Broadcom have also gained access, along with the Linux Foundation, which promotes the free, open-source Linux computer operating system. It comes amid concerns that hackers could end up using such tools to figure out passwords or crack encryption that is intended to keep data safe. The company said the oldest vulnerabilities uncovered by Mythos were up to 27 years old; none are believed to have been noticed by their creators or tech monitors before being identified by the AI model. The meeting comes weeks after the US government designated Anthropic as a supply chain risk, a designation that Anthropic is fighting in court. The Federal Reserve, Anthropic and the US banks declined requests for comment from Bloomberg. The Treasury did not respond to the news outlet's request for comment.
[18]
US Treasury Seeking Access to Anthropic's Mythos to Find Flaws
The US Treasury Department's technology team is seeking to gain access to Anthropic PBC's Mythos AI model so it can begin hunting for vulnerabilities, according to a person familiar with the situation. Treasury Chief Information Officer Sam Corcos was aiming to gain access to the model, which Anthropic has been releasing to a limited number of institutions, as soon as this week, the person said, asking not to be identified because the information isn't public. Corcos briefed the Treasury's cybersecurity team on the technology last week and directed it to prepare for the eventual threats from powerful AI systems, the person said. Treasury didn't respond to requests for comment. Anthropic declined to comment. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders last week to an urgent meeting on concerns that Mythos will usher in an era of greater cyber risk. Anthropic has warned that the model may be capable of powering cyberattacks if companies don't test it against their own systems and build defenses ahead of any wider release. At the meeting last week, Wall Street leaders were urged to take Mythos seriously and to use it to detect vulnerabilities. The Treasury Department is seeking access from Anthropic despite the Pentagon labeling the artificial intelligence company a US supply chain risk earlier this year. The government made the declaration after a dispute with Anthropic over how its AI technology may be used by the military, and set a six-month period for the company to hand over AI services to another provider. The startup is fighting the designation in federal court. 
Corcos -- co-founder of a health tech startup and a part of the Department of Government Efficiency, or DOGE, that Elon Musk led to orchestrate cuts at agencies -- was named Treasury's chief information officer in mid-2025. Corcos had encouraged the use of Anthropic's Claude AI tools within the Treasury's technology team before the company was labeled a supply chain risk, according to the person familiar with the situation. During testing of Mythos, Anthropic's security team found it was capable of identifying and then exploiting vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." In one case, it wrote a web browser exploit that chained together four vulnerabilities. In a recent blog, Anthropic introduced an initiative called Project Glasswing through which a limited number of organizations are testing Mythos. Anthropic said in the post that it has been in "ongoing discussions" with government officials about the model and its capabilities. "We are ready to work with local, state, and federal representatives," the startup said. Wall Street banks themselves have begun testing Mythos internally. While JPMorgan Chase & Co. was the only bank named as part of an initiative to test the Mythos model, other major financial institutions including Goldman Sachs Group Inc., Citigroup Inc., Bank of America Corp. and Morgan Stanley have also gained access, people familiar with the matter told Bloomberg last week.
[19]
Bessent, Powell warn bank CEOs about Anthropic model cyber risks
US Treasury Secretary Scott Bessent and Federal Reserve chair Jerome Powell summoned Wall Street leaders to an urgent meeting on concerns that the latest artificial intelligence model from Anthropic, Mythos, will usher in an era of greater cyber risk. Bessent and Powell assembled the group at Treasury's headquarters in Washington on Tuesday (Wednesday AEST) to make sure banks are aware of possible future risks raised by Mythos and potential similar models, and are taking precautions to defend their systems, according to people familiar with the matter who asked not to be identified.
[20]
Fed Chair Jerome Powell, Treasury's Bessent and top bank CEOs met over Anthropic's Mythos model
Richard Escobedo covers economic policy at CBS News and is a coordinating producer at Face the Nation with Margaret Brennan. He joined CBS in 2018 and is a graduate of Texas Christian University in Fort Worth, Texas. Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent met with top bank CEOs in a closed-door meeting this week to discuss the cybersecurity risks posed by Anthropic's latest AI model, Mythos, sources told CBS News. JPMorgan Chase chief executive Jamie Dimon was invited but was unable to attend, according to the sources. The meeting was earlier reported by Bloomberg News, which said Powell and Bessent summoned the financial executives to the Treasury Department's Washington, D.C., headquarters to discuss the potential risks from Mythos and other AI models. Anthropic, the developer of generative AI chatbot Claude, on Tuesday said it was forming a project with several major tech companies, including Amazon, Apple and Nvidia, to use Mythos to strengthen cybersecurity defenses. Anthropic added that it will not widely release Mythos because of its advanced capabilities, which have uncovered vulnerabilities in major operating systems and web browsers. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," Anthropic said in a post about the new project. "The fallout -- for economies, public safety, and national security -- could be severe." The effort, called Project Glasswing, "is an urgent attempt to put these capabilities to work for defensive purposes," Anthropic said on Tuesday. In 2023, the Biden Administration identified AI as a potential risk to financial stability. It was the first time that designation had been made, CBS News reported at the time.
[21]
Anthropic's Mythos puts DC, Wall Street on high alert
The limited release of Anthropic's new Mythos model is putting Washington officials on high alert after the AI firm's warning about the model's security risks sent shockwaves through the tech industry and sparked debate. Within days of being informed of Anthropic's new technology, the White House ratcheted up a multipronged response involving Trump administration leaders across agencies to evaluate just how powerful AI is becoming. Anthropic's announcement follows years of AI warnings, but last week it seemingly landed differently, upping pressure on Washington to stay ahead even as some question the extent of the latest threat. "A bunch of people in the [Trump] administration are coming to the realization" that AI development has not plateaued as some officials predicted last summer, Dean Ball, the co-author of the Trump White House AI Action plan, told The Hill in an interview Monday. "They are realizing, 'My goodness, I'm going to have to jump in here and get involved and get my hands dirty because this is not being handled,'" said Ball, adding, "The administration was not prepared to deal with this, that's just the frank reality." Anthropic announced last week it will hold back the full release of Claude Mythos Preview, claiming the model is too dangerous for the public at this stage. The model was released to a small group of technology firms and critical software builders, which will use the model in their defensive security work and share their findings with Anthropic under a new initiative called Project Glasswing. Partners including Cisco, Google, and Palo Alto Networks came out in support of the project, with the latter company calling it a "game changer" for finding hidden vulnerabilities. Mythos has already found thousands of high-severity vulnerabilities, some of which date back more than two decades, according to Anthropic.
While the tool can help governments find these vulnerabilities, it also makes it much easier for hackers to exploit these security gaps. "This time, the threat is not hypothetical," Anthropic's researchers wrote in a Mythos assessment. "Advanced language models are here." Prior to external release, Anthropic briefed senior officials across the U.S. government on Mythos's offensive and defensive cyber capabilities, an Anthropic official confirmed to The Hill on Monday. This includes conversations with the Cybersecurity and Infrastructure Security Agency and the Center for AI Standards and Innovation, the official said. On the same day Project Glasswing was announced, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened a group of Wall Street executives to discuss the cybersecurity concerns, multiple people familiar with the meeting told The Hill. Executives, including Bank of America CEO Brian Moynihan and Goldman Sachs CEO David Solomon, attended the meeting, Bloomberg first reported. The Hill's sources said several of the executives were already in Washington for the Financial Services Forum, an advocacy organization of the country's eight largest banks. Ahead of Anthropic's announcement, Bessent also joined Vice President Vance on a call with a group of technology leaders, including Anthropic CEO Dario Amodei, xAI CEO and former Trump adviser Elon Musk and OpenAI CEO Sam Altman, to discuss the security of xAI models, CNBC reported. "There's definitely a sense of urgency, it's something that is relatively new. The AI has developed a way to find hacks in other software," National Economic Council Director Kevin Hassett told Fox News on Friday. And later last week, The Wall Street Journal reported National Cyber Director Sean Cairncross will lead a group of federal officials to identify security vulnerabilities in critical infrastructure and strengthen government systems against AI exploitation. 
When asked about this project, a White House official said the administration is "proactively engaging across government and industry," and that the federal government continues to "work with AI companies to ensure their models help secure critical software vulnerabilities." Some in the cybersecurity space welcome the efforts from Washington, saying such a move is a "long time coming" as AI rapidly advances. "I don't think anyone can solve this alone," Brad Medairy, the executive vice president of integrated cyber business at Booz Allen Hamilton, told The Hill. "I'm really encouraged, the federal government is being proactive and engaging and taking this seriously." "Our senior-most leaders in government are like, 'let's get after this,'" Medairy added. "Cybersecurity has been top of mind for governments all over the world for many years," said Gil Messing, chief of staff at Check Point, a cybersecurity vendor and government contractor. "But when you add such a powerful toolkit that in the past was either imaginary or only being used by superpowers [and] now it's being commoditized ... the threat is real." Wall Street was similarly receptive to the warnings. Solomon told investors on Monday's earnings call that the bank has access to Mythos and is working with Anthropic and other security vendors to bolster security. "Cybersecurity has long been at the core of our business," Solomon said. "This is part of our ongoing capabilities that we have been investing in and are accelerating our investment in. We're aware of Mythos and its capabilities. We have the model. We're working closely with Anthropic and all of our security vendors." Although CEO Jamie Dimon could not attend last week's meeting with Bessent and Powell, JPMorgan Chase was also publicly listed as a partner for Project Glasswing. While players in Washington and on Wall Street were quick to react, some in the Trump administration's orbit remain skeptical over Anthropic's claims.
David Sacks, who recently ended his role as the White House's AI and cryptocurrency czar, posted numerous times over the weekend about his hesitations. "Anthropic has proven it's very good at two things. One is product releases, the second is scaring people," Sacks said on the "All-In" podcast last week. "We've seen a pattern in their previous releases ... at the same time they roll out a new model or new model [system] card, they also roll out some study showing the worst possible implication where the technology could lead." In a post on the social platform X, Sacks wrote, "The world has no choice but to take the cyber threat associated with Mythos seriously. But it's hard to ignore that Anthropic has a history of scare tactics." He attached a screenshot of what appears to be an AI-generated list of links where Amodei highlighted "alarming AI risk narratives frequently tied to model launches." Katie Miller, a former employee of Musk and the Department of Government Efficiency and the wife of top White House aide Stephen Miller, added on X, "It sounds like Anthropic is running a giant public relations scheme to manipulate industry fears. A playbook Dario has used in the past." Ball, who left the White House last August, said there is a "real dissonance" in the administration over AI's capabilities. Even as it continues to release some of the most-used AI products, Anthropic has tried to set itself apart in the AI space, often discussing safety and the risks of the technology on national security and the workforce. This mission has put Anthropic at odds with the Trump administration, which is forging ahead with a pro-innovation and light-regulation agenda for the technology. Tensions reached a boiling point earlier this year, when the Pentagon cut ties with the AI firm after the company requested certain assurances over how the government and military can use AI.
Looking forward, tech industry observers predict other companies could soon follow suit and release similar types of products. "You see coalescence around these new technology trends ... there's safety in numbers," said David Menninger, the executive director of software research for ISG, an AI-centered tech research and IT advisory firm. "You rarely want to be the only one consuming a certain type of technology. Do other large language model providers start to provide similar types of capabilities that would compete with some of these things that Mythos is offering?" he added.
[22]
Goldman Is Using Mythos, Working With Anthropic on Cyber Risks
Goldman Sachs is working closely with Anthropic and security vendors, and is aware of the model's capabilities, with CEO David Solomon saying "cybersecurity has long been at the core of our business". Goldman Sachs Group Inc. is "supplementing" its cyber and infrastructure resilience after regulators warned the largest US banks about the latest artificial intelligence model from Anthropic PBC, Chief Executive Officer David Solomon said. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders last week to an urgent meeting on concerns that Mythos, the latest AI model from Anthropic, will usher in an era of greater cyber risk. "Cybersecurity has long been at the core of our business," Solomon said Monday on an earnings conference call with analysts. "This is part of our ongoing capabilities that we have been investing in and are accelerating our investment in. We're aware of Mythos and its capabilities. We have the model. We're working closely with Anthropic, and all of our security vendors." Wall Street banks including Goldman have started to test the Mythos model internally, Bloomberg News previously reported. Government officials didn't raise any specific threat to financial institutions and more generally encouraged the banks to run the model against their own systems to improve their own defenses. Many of the executives called to last week's meeting, including Solomon, were in Washington already for a meeting of the Financial Services Forum, an advocacy group made up of the biggest lenders. "By the way, it is not the first meeting that that group has gone over to Treasury to talk about cybersecurity risks over a number of years," Solomon said Monday. "This is something the industry is focused on. It's something we're focused on. And there's nothing new in that focus."
[23]
Wall Street CEOs' First Reactions to Anthropic's Mythos
As banking leaders tout A.I.'s potential, they warn its power is also creating complex new vulnerabilities. Banks are among the most enthusiastic adopters of A.I., but also the most exposed to the technology's growing cybersecurity threats. That vulnerability came into sharper focus earlier this month with the release of Anthropic's Mythos Preview, a highly advanced A.I. model that's drawn concern across Wall Street. On earnings calls, JPMorgan Chase CEO Jamie Dimon and Goldman Sachs' David Solomon said they are testing Mythos to better understand the new risks that come with rapid advances in A.I. "A.I.'s made it worse, it's made it harder," Dimon told analysts today (April 14). "While we're trying to get the benefit of A.I., we're also very cognizant of the risks." Those risks are central to Mythos, which Anthropic describes as too dangerous to release publicly because of its ability to exploit vulnerabilities in critical software. Instead, the company has invited a consortium of major businesses, including JPMorgan, to test the model internally for use in strengthening their cybersecurity defenses. The preview effort, called Project Glasswing, takes its name from glasswing butterflies, which use transparent wings to hide in plain sight -- a metaphor Anthropic says reflects how hidden cyber weaknesses can evade detection. The initiative, which includes other Wall Street banks as well as Apple, Google and Nvidia, will be funded by $100 million in model usage credits from Anthropic. The release of advanced models like Mythos has created "additional vulnerabilities" beyond banks, Dimon said.
"Banks, of course, are attached to exchanges and all these other things that create other layers of risks. It's a complex one." Dimon, who has led JPMorgan for two decades, emphasized that cybersecurity remains a top priority for the nation's largest bank by assets and market capitalization. "We spend a lot of money. We've got top experts. We're in constant contact with the government." Following the Mythos release, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened leading Wall Street executives in Washington, D.C. last week to discuss the newfound threats posed by the model. While Dimon was reportedly unable to attend, peers including Bank of America's Brian Moynihan, Citigroup's Jane Fraser, Morgan Stanley's Ted Pick and Wells Fargo's Charlie Scharf joined the meeting. Officials of foreign central banks, including the Bank of Canada and the Bank of England, are hosting similar briefings with top financial leaders. Solomon, who also attended the Treasury meeting, addressed the model during Goldman Sachs's earnings call yesterday (April 13). "Obviously, the LLMs are making rapid progress," Solomon told analysts. "We're hyper-aware of the enhanced capabilities of these new models." Despite their caution, both Dimon and Solomon remain confident that A.I. will ultimately transform banking for the better. JPMorgan has already applied A.I. to more than 500 use cases, while Goldman has used it in coding and translation and earlier this year partnered with Anthropic to incorporate its Claude model across accounting and compliance. "It will not be a straight line whenever you have acceleration in new technology," said Solomon. "There are going to be bumps, and there are going to be risk issues, and there are going to be recalibrations."
[24]
Can Anthropic Mythos AI detect hidden financial cyber threats before attacks, and how Wall Street banks test next-gen cybersecurity defense systems today
Anthropic's Mythos AI is reshaping Wall Street cybersecurity at unprecedented scale, as banks test the advanced model to detect hidden financial cyber threats early. Reports indicate it has already uncovered thousands of zero-day vulnerabilities across major systems, changing how financial institutions approach cyber risk and data protection. Mythos scans faster than any human team and can predict attack paths before hackers act, giving banks a critical time advantage. But fixing the flaws it finds remains a challenge, and security teams face pressure to respond quickly. The shift signals a new era of AI-driven cybersecurity.
[25]
US Treasury Secretary warns bank CEOs on Anthropic's new AI model
The US government was briefed on the Mythos model ahead of its public launch last week, on its "offensive and defensive cyber capabilities". On Tuesday, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with bank CEOs in Washington DC to discuss the dangers of the new model, sources reported to Bloomberg. The meeting was convened to warn banks of the risks posed by Mythos. The CEOs reportedly present at the meeting represented Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs.
[26]
Bessent, Powell Warn Bank CEOs About Anthropic Model Risks, Bloomberg News Reports
U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank CEOs this week to warn of cyber risks posed by Anthropic's latest AI model, two sources familiar with the matter said on Thursday. Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company said the model was capable of identifying and exploiting weaknesses across "every major operating system and every major web browser". The meeting, held at the Treasury Department in Washington on Tuesday, was aimed at ensuring banks are aware of the potential risks posed by Mythos and similar models, and are taking steps to defend their systems, one of the sources said. The invitation came while most CEOs of the largest U.S. banks were already in Washington to attend other meetings, one of the sources said. Access to Mythos will be limited to about 40 technology companies, including Microsoft (MSFT.O) and Google (GOOGL.O), and Anthropic has been in ongoing talks with the U.S. government about the model's capabilities, the startup has said. Bloomberg News, which first reported the matter on Thursday, said the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo and Goldman Sachs were present. JPMorgan CEO Jamie Dimon was unable to join, one of the sources told Reuters. Goldman Sachs and the Federal Reserve declined to comment. The Treasury, lenders and Anthropic did not immediately respond to Reuters' requests for comment. (Reporting by Saeed Azhar in New York, Carlos MƩndez in Mexico City; Writing by Chris Thomas; Editing by Sumana Nandy)
[27]
Bessent summons bank executives over Anthropic cyber risk
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell brought together a group of bank executives this week to discuss cybersecurity concerns in the wake of Anthropic's new Mythos model, multiple people familiar with the meeting told The Hill. The meeting, held at the Treasury Department on Tuesday, came as Anthropic announced its latest artificial intelligence model, Claude Mythos Preview, will be held back from public release because its capabilities are too advanced and risky to end up in bad actors' hands. Several of the executives were already in Washington for the Financial Services Forum, an advocacy organization of the country's eight largest banks, The Hill's sources said. Bloomberg, which first reported the meeting, said Bessent and Powell convened the group to warn of the possible risks the Mythos model could present to banks, citing anonymous people familiar with the matter. Reuters reported the meeting sought to ensure the banks were taking steps to defend their systems. Anthropic said this week it has been in discussions with US government officials about Claude Mythos Preview and "its offensive and defensive capabilities." Those attending the meeting included Bank of America CEO Brian Moynihan, along with Citigroup CEO Jane Fraser, Morgan Stanley CEO Ted Pick, Goldman Sachs Group CEO David Solomon and Wells Fargo & Co. CEO Charles Scharf, Bloomberg reported. JPMorgan CEO Jamie Dimon was not able to attend, one of the people familiar confirmed to The Hill. Representatives for the banks and the Federal Reserve declined to comment. The Treasury Department did not immediately respond to The Hill. The Claude Mythos Preview model will be available to a select group of technology firms including Apple, CrowdStrike, Amazon Web Services, and Microsoft, according to Anthropic.
These companies, aligned with more than 40 organizations that build or manage critical software infrastructure, will use Mythos Preview in their defensive security work, and Anthropic will share their findings with the whole industry. The consortium is part of Anthropic's new initiative, Project Glasswing, which was formed after the company discovered the capabilities of Mythos Preview.
[28]
Anthropic wins accolades from Canada's AI minister over Mythos approach
A Canadian Cabinet minister has praised Anthropic's decision to introduce its Mythos model to select companies and allow them to test the technology before releasing it more widely. "Working with defenders first, rather than releasing this new model broadly, is the responsible path and gives people protecting critical systems a head start," Evan Solomon, Canada's minister responsible for artificial intelligence, said Tuesday after meeting with officials from the AI company. Anthropic has warned that Mythos is powerful enough to enable cyberattacks, and has urged companies to run it against their own systems and build defenses ahead of any wider release. The San Francisco-based company has limited access to a small number of firms initially, including JPMorgan Chase & Co., Amazon and Apple. They're all part of "Project Glasswing," which will work to secure the most important systems before similar AI models become available.
[29]
Scott Bessent Sounded The Alarm On AI Threats -- Now The Bank Of England And UK Treasury Are Holding Urgen
UK Regulators Assess AI-Exposed Cyber Risks
The discussions focus on "Claude Mythos Preview," which has demonstrated an advanced ability to identify vulnerabilities across operating systems, browsers and critical infrastructure, the Financial Times reported on Sunday. This has raised concerns about potential exploitation by malicious actors.
Banks, Insurers To Be Warned On AI Vulnerabilities
Major U.K. banks, insurers and financial market infrastructure firms are expected to be briefed in the coming weeks on the potential cybersecurity threats, the report said, citing two sources briefed on the talks. The issue is also set to feature in an upcoming meeting of the Cross Market Operational Resilience Group, which coordinates responses to systemic risks across the financial sector. Authorities are assessing whether these AI capabilities could expose weaknesses in legacy systems widely used by financial institutions, potentially increasing the risk of large-scale cyberattacks. Anthropic did not immediately respond to Benzinga's request for comment.
Global Alarm After US Treasury Steps In
The U.K. response follows action by U.S. Treasury Secretary Scott Bessent, who earlier this week convened top Wall Street bank executives to discuss similar concerns around AI-driven cyber threats. Anthropic has also said its latest model has already identified thousands of high-severity vulnerabilities, some of which may have gone undetected for years. Anthropic said it does not intend to release the model for broad public use.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[30]
US summons bank chiefs over AI cyber risks from Anthropic's latest AI model
US officials have summoned top banking executives to Washington amid growing concerns over cybersecurity risks linked to the new AI model developed by Anthropic. The meeting, reportedly led by Treasury Secretary Scott Bessent, comes shortly after the unveiling of the company's latest system, Claude Mythos. Among those attending is Federal Reserve Chair Jerome Powell, alongside executives from major financial institutions including Goldman Sachs, Bank of America and Citigroup. The gathering focuses on the potential risks posed by advanced AI tools capable of identifying and exploiting software vulnerabilities at a level comparable to, or exceeding, that of human experts. Anthropic recently warned that its model had uncovered thousands of previously undetected weaknesses in widely used systems, raising fears that such technology could be misused by hackers. A leak of the model's code earlier this month intensified concerns, with the company acknowledging that the broader impact on economies and national security could be significant. In response, Anthropic has restricted access to the model to a limited group of companies, including major tech firms and infrastructure organisations. This marks the first time the company has limited a product release.
[31]
Bank of England Set to Discuss Anthropic's Mythos With Banks
The Bank of England plans to discuss the impact of Anthropic PBC's new AI model with financial institutions, as UK regulators join their peers in the US and elsewhere in raising alarms over the risks posed by the tool. Anthropic's Mythos model will be on the agenda for the BOE's next Cross Market Operational Resilience Group and CMORG AI Taskforce meetings, scheduled within the next two weeks, according to a person with knowledge of the matter. The meetings, earlier reported by the Telegraph, will include representatives from the Treasury, the Financial Conduct Authority and the National Cyber Security Centre. Wall Street banks have started testing Mythos internally after the Trump administration this week warned executives they should take the model seriously and deploy its capabilities to detect vulnerabilities. The Bank of Canada on Friday also met with banks and financial firms to discuss the cybersecurity risks posed by Mythos. The meetings reflect growing concern among regulators that a new breed of cyberattacks is one of the biggest risks facing the financial industry. Anthropic's Mythos is a sophisticated new model capable of identifying and exploiting vulnerabilities in every major operating system and web browser, the company has said.
[32]
What AI-Driven Attack Chains Mean for CFOs and CISOs | PYMNTS.com
Newly introduced frontier models from OpenAI and Anthropic demonstrate the capability to exploit software vulnerabilities at a scale no human team can match. A new evaluation released Monday (April 13) by the U.K. government's AI Security Institute (AISI) found that Anthropic's Claude Mythos Preview has successfully crossed into the early stages of operational cyber capability. In simulated environments, the model did not simply execute isolated commands. It stitched together reconnaissance, exploitation, persistence and lateral movement into a coherent attack sequence. It adjusted its approach when steps failed, and maintained continuity across stages that traditionally required human oversight. Historically, complex cyberattacks have been constrained by talent. Skilled operators are expensive, scarce, and often tied to a nation-state or well-funded criminal group. The latest models from the world's biggest AI providers could represent a crucial cybersecurity inflection point. AI is no longer just a tool in the hands of an attacker; it is beginning to replicate aspects of the attacker itself. For CFOs and CISOs alike, the implication is an increasingly stark one. Cyber risk is shifting from a targeted phenomenon to something closer to ambient exposure. Organizations are not just selected; they are continuously scanned, probed and tested by systems operating at scale. The median enterprise, the one with uneven patching, over-permissioned accounts, and inconsistent configuration management, is now more accessible to multistep intrusion attempts that can be executed, or at least orchestrated, by AI systems.
Still, the most important takeaway from the Mythos evaluation is not that AI can already execute flawless cyberattacks. It cannot. As the U.K. report noted, the success rate is partial, the model's capabilities are constrained, and its deployment remains controlled. But systems that can plan and execute multistage intrusions, even inconsistently, represent a baseline that will improve. More compute, better orchestration, and tighter integration with external tools will incrementally close the gap between partial and reliable capability. For CISOs, that means designing for a world where sophistication is no longer rare. For CFOs, it means recognizing cyber risk not as an occasional disruption, but as a persistent, evolving cost of doing business in a digitized economy. The baseline has moved. The question is who will move with it.
[33]
Banks Put On Alert As Powerful Anthropic AI Raises Cybersecurity Fears - Bank of America (NYSE:BAC), Citi
U.S. officials reportedly warned major banks about a powerful new artificial intelligence system that could expose critical cybersecurity weaknesses. The alert came during a closed-door meeting involving top regulators and banking executives in Washington, The New York Times reports, raising concerns about emerging AI-driven threats.
Government Officials Flag Rising AI Risks
Federal Reserve Chair Jerome H. Powell also attended the discussion. Officials focused on growing cyber risks tied to advanced artificial intelligence systems. Authorities warned that new AI models could uncover software vulnerabilities faster than traditional security methods. This capability could create opportunities for malicious actors.
Anthropic Model Raises Security Concerns
The warnings centered on a new model from Anthropic called Claude Mythos Preview. The company said the system can detect hidden software flaws beyond human capability. Officials cautioned that such tools could become dangerous if accessed by hackers. They stressed that sensitive financial data could face increased exposure risks. Anthropic acknowledged these risks and limited access to the model. The company created a restricted initiative called "Project Glasswing" involving around 40 organizations.
Banks Begin Controlled Testing
JPMorgan Chase & Co. (NYSE:JPM) joined the initiative to test the model. The bank said it would evaluate AI tools for defensive cybersecurity applications. CEO Jamie Dimon did not attend the meeting due to prior commitments. However, the bank remains involved in early-stage testing efforts. Officials emphasized urgency in addressing AI-related threats across financial systems. Kevin A. Hassett said, "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out."
Policy Tensions Add Complexity
The U.S.
government and Anthropic are currently engaged in a legal dispute. The Defense Department labeled the company a "supply chain risk." This designation followed disagreements over restrictions on military use of AI technology. The situation highlights growing tensions between innovation and national security priorities.
[34]
Did Trump Officials Urge Banks to Test Anthropic's Mythos Model?
If so, it would be surprising as the two entities are battling lawsuits over the Trump regime designating Anthropic as a supply chain risk. After knocking Anthropic off all government contracts, the Trump administration now seems to have smoked the peace pipe with the Claude maker. Or at least started the process of doing so by suggesting that bank executives in the United States use the new Mythos AI model to detect vulnerabilities in their systems. Reporting on this ostensible change of heart, the Bloomberg wire agency says it was Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell who made this suggestion during a meeting with bank executives. When Anthropic announced that it was forming a partnership with several top tech companies to test out security vulnerabilities using Mythos, the only representative from the banking sector happened to be JPMorgan Chase. However, the report claims that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are also testing Mythos. What makes the report intriguing is that Anthropic is battling the Trump administration in court over the Department of Defence designating the company as a supply-chain risk and booting it out of a deal with the Pentagon worth $200 million in revenues. Immediately after this incident, OpenAI's Sam Altman pitched in and turned the business his company's way. Maybe this latest move is a result of what transpired across the pond in Britain. Financial regulators in the UK spoke with the government's cybersecurity agency and major banks to assess risks posed by Anthropic's latest model. Media reports said representatives of banks, insurers and exchanges are expected to be briefed about Mythos next fortnight. Last week, Anthropic announced that it was having a limited debut for Mythos for a small group of partner companies.
Describing it as one of its "most powerful models yet," Anthropic initiated Project Glasswing, a new security initiative under which a clutch of big companies would deploy the model "to drive defensive security work" and secure critical software. "Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," the company said in a post. The fallout for economies, public safety, and national security could be severe, the post warned, adding that Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes. Among the zero-day vulnerabilities the model found, many are critical and some decades old, the blog post noted. Anthropic had secured an injunction against the Trump administration's order from Judge Rita F. Lin of the Northern District of California on March 27, but the company's efforts to halt the blacklisting came to nought when a Washington judge accepted that, while Anthropic would suffer monetary harm, the situation did not warrant an injunction.
[35]
Wall Street Banks Try Out Anthropic's Mythos as US Urges Testing
Wall Street banks are starting to test Anthropic PBC's Mythos model internally as Trump administration officials encourage them to use it to detect vulnerabilities. While JPMorgan Chase & Co. was the only bank named as part of an initiative to test the Mythos model, other major financial institutions have also gained access or expect to in the coming days, according to people familiar with the matter. Goldman Sachs Group Inc., Citigroup Inc., Bank of America Corp. and Morgan Stanley are among the banks testing the technology internally, the people said. Those firms either declined to comment or had no immediate response. During the meeting with Wall Street leaders, summoned by US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, executives were warned that they should take the Mythos model seriously and deploy its capabilities to detect vulnerabilities, the people said, asking not to be identified because the information isn't public. Government officials didn't raise any specific threat to financial institutions and more generally encouraged the banks to run the model against their own systems to improve their own defenses, they said. Bloomberg reported earlier that Bessent and Powell had assembled the group of banking executives on April 7 at Treasury's headquarters in Washington on short notice to ensure that banks were aware of possible risks raised by Anthropic's Mythos and similar models. The executives were in town already for a meeting of the Financial Services Forum, an advocacy group made up of the biggest lenders. A representative from the Treasury Department didn't respond to a request for comment. A Federal Reserve spokesperson had no immediate comment. The urging by Trump officials underscores the concern growing among regulators that a new breed of cyberattacks is one of the biggest risks facing the financial industry. 
All the banks summoned to the meeting are classified as systemically important by top regulators, meaning their stability is a priority for the global financial system. Anthropic has said that it had been in discussions with US officials about Mythos and its "offensive and defensive cyber capabilities" prior to its recent release. The company has limited the release of Mythos to a few dozen firms initially. Those companies, which include JPMorgan, Amazon.com Inc. and Apple Inc., are part of what's being called "Project Glasswing," which will work to secure the most important systems before other similar AI models become available. In releasing Mythos to a very limited set of companies, Anthropic pointed to several vulnerabilities that the AI system was capable of both identifying and potentially exploiting during testing. None of the examples related specifically to financial institutions, but in one instance, the firm's security team said it was able to compromise a web browser so that a website set up by a hacker could read data from another website, "e.g., the victim's bank." Mythos Preview "fully autonomously discovered" a way of reading information stored in "multiple different web browsers" and then used that ability to find ways to exploit them, according to a post from Anthropic's security team. In one case, Anthropic said, Mythos found a means of exploiting web browsers that utilized multiple vulnerabilities.
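The cross-site read described above is a break in the browser's same-origin policy, the rule that normally stops a page on one site from reading data belonging to another. As a rough sketch of what that boundary compares (the URLs below are hypothetical, and real browsers apply many additional rules):

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin only if scheme, host, and port all match."""
    def origin(url: str):
        u = urlsplit(url)
        # Fill in the default port when the URL omits it.
        port = u.port or {"http": 80, "https": 443}.get(u.scheme)
        return (u.scheme, u.hostname, port)
    return origin(url_a) == origin(url_b)

# The default HTTPS port (443) counts as the same origin...
print(same_origin("https://bank.example/account", "https://bank.example:443/"))
# ...but a different host, or a different scheme, does not.
print(same_origin("https://attacker.example/", "https://bank.example/"))
```

An exploit of the kind Anthropic describes works by making the browser ignore this check entirely, so code served from one origin can read responses from another.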
That tactic often represents a challenge for human hackers, who struggle to find and exploit multiple flaws at once. So-called vulnerability chains can serve as pathways into otherwise highly secure systems, as in the Stuxnet attack that damaged centrifuges at an Iranian nuclear facility. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic's request to pause the Pentagon's designation. National Economic Council Director Kevin Hassett said during an interview with Fox News that there's a sense of urgency as US officials push banks to improve their digital defenses with AI technology. "It was appropriate that Secretary Bessent do what he did," he said of the meeting with Wall Street leaders. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," he said. In recent years, regulators have required banks to hold some capital tied to the potential for cyberattacks, as well as other so-called operational risks such as lawsuits and rogue employees. Banks have sometimes chafed at those requirements, given that operational risk is more difficult to measure than the market and credit risks that also factor into banks' capital levels.
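The idea of a vulnerability chain can be pictured as path-finding: no single flaw reaches the target, but each one opens the next hop. The flaw graph and system names below are invented purely for illustration:

```python
from collections import deque

# Each hypothetical flaw is an edge from one system to the next:
# none of them alone reaches the "vault," but together they chain.
FLAWS = {
    "internet": ["web_server"],        # e.g. an exposed input parser
    "web_server": ["app_host"],        # e.g. a sandbox escape
    "app_host": ["internal_network"],  # e.g. weak service credentials
    "internal_network": ["database"],  # e.g. an unpatched driver
}

def exploit_chain(start: str, target: str):
    """Breadth-first search for a chain of individually modest flaws."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path  # the shortest chain of hops
        for nxt in FLAWS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain exists

print(exploit_chain("internet", "database"))
```

The point of the sketch is the defender's asymmetry: breaking any one edge severs the whole chain, which is why patching even low-severity flaws matters.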
[36]
Anthropic and OpenAI Just Rewrote the Cybersecurity Playbook | PYMNTS.com
OpenAI followed on Tuesday (April 14) with GPT-5.4-Cyber, deploying its system to thousands of verified defenders through its Trusted Access for Cyber program. Both models can find and exploit software vulnerabilities at a scale no human team can match. What divides them is a fundamental disagreement about what to do with that power. Anthropic's Mythos doesn't assist security teams. It works independently. Given a target and a prompt asking it to find a vulnerability, the model reads code, forms hypotheses, tests them against a running environment and produces a complete exploit without further human input. Anthropic confirmed that these capabilities weren't explicitly trained into the model. They emerged as a downstream consequence of general improvements in code, reasoning and autonomy. The same improvements that make the model more effective at patching vulnerabilities also make it more effective at exploiting them. Mythos was able to find serious security weaknesses that had been hiding in widely used software for years. Some of these flaws had gone unnoticed for over a decade, despite being reviewed many times by experts and existing tools. What stands out is that the AI model found them on its own after a simple prompt, without any ongoing human help. VentureBeat noted that Anthropic engineers with no formal security training asked Mythos to find remote code execution vulnerabilities overnight and woke up to a complete working exploit by morning. On a standardized security test built around real vulnerabilities in Mozilla Firefox, Mythos successfully turned known weaknesses into working exploits 181 times, compared to just two successful attempts by the earlier model. That's a dramatic leap in its ability to both find and act on software flaws. According to Anthropic, that gap drove its decision to keep the model out of general circulation.
Reuters found that the model's coding ability has given it a potentially unprecedented capacity to identify vulnerabilities and devise ways to exploit them, with the timeline for finding and fixing flaws collapsing from months to seconds. PYMNTS reported that Project Glasswing's partners include cybersecurity firms and infrastructure players, giving them a head start to rewrite insecure legacy code before criminals can act. GPT-5.4-Cyber is built around a different premise. Rather than autonomous operation, it's designed to remove the friction that security professionals hit when using standard AI tools. Axios reported that OpenAI designed the model after some cyber partners said earlier GPT models sometimes refused dual-use security queries outright. The model lets analysts examine compiled software for weaknesses without access to the underlying source code, work that previously required specialized researchers. It's a bet on a different theory of control. SiliconAngle noted that OpenAI shifted away from restricting what models can do and toward verifying who gets access to the most sensitive capabilities. The Trusted Access for Cyber program launched in February alongside a $10 million cybersecurity grant program and now carries tiered verification levels, with higher tiers unlocking more capable tools. The Hacker News detailed that OpenAI expanded access to thousands of authenticated individual defenders and hundreds of teams responsible for securing critical software. Its Codex Security product contributed to fixes on more than 3,000 critical and high-severity vulnerabilities since launch. The two positions reflect a strategic disagreement. Anthropic concluded Mythos was too capable to distribute widely, regardless of who was asking. OpenAI concluded that wider access to properly verified defenders produces better outcomes than scarcity. Financial institutions face a real test. 
Reuters found that banks are particularly exposed because they run technology stacks spanning both new and decades-old systems, house undiscovered vulnerabilities and are closely interconnected. Costin Raiu, co-founder of cybersecurity firm TLPBLACK, told Reuters that a model like Mythos would have "a field day" finding exploits in certain IBM systems, pointing to legacy technologies powering the financial industry as a prime example.
[37]
Anthropic's Mythos Made Scott Bessent, Jerome Powell All Bank CEOs: But What Do Prediction Markets Say Ab
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called the CEOs of every major systemically important U.S. bank to Treasury headquarters on Tuesday to warn them about cybersecurity risks from Anthropic's Claude Mythos model.

Why Banks Are Exposed

Mythos found thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD that survived 5 million passes by automated testing tools. That's a problem for Wall Street, where a single breach at a systemically important bank could cascade through the financial system. Many of the largest U.S. banks still run core systems on legacy code dating back decades. JPMorgan's retail banking core reportedly still uses elements of the Hogan system from the 1980s. If Mythos can surface flaws that every existing security tool missed, banks might be one of the more vulnerable sectors.

The Cybersecurity Trade

Anthropic launched Project Glasswing this week, giving roughly 40 companies access to Mythos for defensive security work. The market appears to be repricing the entire sector around a simple question: if an AI model can find vulnerabilities that elite human researchers missed for decades, what are cybersecurity companies actually selling?

Anthropic Keeps Pulling Away

Anthropic's annualized revenue reportedly hit $30 billion in April, surpassing OpenAI's roughly $24 billion run rate. The company went from $9 billion at year-end 2025 to $30 billion in about four months, with over 1,000 companies now spending more than $1 million annually on Anthropic products. Polymarket traders give Anthropic a 63% chance of going public before OpenAI, with an October 2026 listing at a $380 billion valuation considered the base case. A separate contract prices a 94% chance Anthropic tops $500 billion in valuation by year-end.
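The detail about a bug surviving 5 million automated test passes is easy to reproduce in miniature: when a crash hides behind one exact byte sequence, blind random fuzzing almost never stumbles onto it. The vulnerable check below is a hypothetical stand-in, not the OpenBSD bug:

```python
import random

def parser_bug(data: bytes) -> bool:
    # Hypothetical crash that triggers only on one exact 4-byte header;
    # a random input hits it with probability (1/256)**4, about 2e-10.
    return data[:4] == b"\x7fELF"

def random_fuzz(trials: int, seed: int = 0) -> int:
    """Count how often blind random fuzzing triggers the bug."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        hits += parser_bug(rng.randbytes(8))  # Python 3.9+
    return hits

# Tens of thousands of random passes virtually never find the bug,
# while an input crafted to satisfy the condition triggers it at once.
print(random_fuzz(50_000))
print(parser_bug(b"\x7fELFxxxx"))
```

This is why coverage-guided fuzzers and human reviewers can both miss deeply guarded conditions for decades, and why a tool that reasons about the code rather than sampling inputs plays a different game.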
[38]
UK regulators rushing to assess risks of latest Anthropic AI model: report
British financial regulators are holding urgent talks with the government's cyber security agency and major banks to assess risks posed by the latest artificial intelligence model from Anthropic, the Financial Times reported on Sunday. Bank of England, Financial Conduct Authority and Treasury officials are in talks with the National Cyber Security Centre to examine potential vulnerabilities in critical IT systems highlighted by Anthropic's latest AI model, the FT said, citing two people briefed on the talks. Anthropic did not respond to a Reuters request for comment. The BoE, FCA and NCSC declined to comment. The UK Treasury was not immediately available for comment. Representatives from major British banks, insurers and exchanges are expected to be briefed on the cybersecurity risks posed by the model, Claude Mythos Preview, at a meeting with regulators in the next two weeks, the newspaper said. Reuters could not immediately verify details of the report. Reuters reported on Friday, citing two sources, that US Treasury Secretary Scott Bessent had called a meeting with major Wall Street banks on the model's cyber risk potential. AI startup Anthropic has said the model is being deployed as part of "Project Glasswing," a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview model for defensive cyber security purposes. In a blog post earlier this month, Anthropic said the model had already identified thousands of major vulnerabilities across operating systems, web browsers and other widely used software.
[39]
Anthropic Mythos Triggers Fresh Cybersecurity Concerns for Major US Banks
Anthropic's new AI model, Claude Mythos Preview, has drawn close attention from U.S. financial officials after the company limited access to a small group of firms. The model was introduced through Anthropic's Project Glasswing program, which focuses on defensive cybersecurity work. U.S. officials then held meetings with banks and technology leaders to review potential cyber risks associated with advanced AI systems. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with major bank executives in Washington this week to discuss cyber threats tied to advanced AI tools. The meeting focused on how banks should prepare for systems capable of detecting software weaknesses at high speed. Kevin Hassett, director of the White House National Economic Council, later said Bessent and Powell went through the cyber risks with bank leaders so they understood the issue clearly. The session showed that financial authorities are treating AI-related risk as an urgent matter. The concern centers on the banking system's heavy dependence on software, digital records, and networked operations. Any weakness in those systems can create operational problems, service outages, or risks to customer accounts if attackers gain an advantage.
[40]
Canada's major banks, regulators discuss risk posed by new AI model
Canadian bank executives and regulators met Friday to discuss the risks posed by Claude Mythos Preview, Anthropic's new AI model, which the company says is so powerful that it is choosing not to release it to the public. While the Canadian Financial Sector Resiliency Group (CFRG) meets regularly, a spokesperson with the ministry of finance says this meeting was "hastened" by the release of Mythos. This comes on the heels of a similar meeting called by U.S. Treasury Secretary Scott Bessent that included the chief executives of the largest U.S. banks. Members of the CFRG include the Bank of Canada, Department of Finance, the Canadian Deposit Insurance Corporation (CDIC) and Canada's six major banks. In announcing its new model, Anthropic warned Claude Mythos can quickly find vulnerabilities in virtually every major operating system and web browser using relatively simple prompts, putting major institutions like banks, hospitals and energy infrastructure at risk of cyberattacks. "This is the kind of thing that should keep us up at night," tech expert Carmi Levy told CTV News. "We knew this day would come, that AI would get so good that it would be able to break into even the most hardened cyber defences. And that's kind of what we're at with Anthropic's Claude Mythos model." In making its decision not to release Claude Mythos to the public, Anthropic said "given the rate of AI progress it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe." "It's like using ChatGPT to be a hacker," said Levy. "It means that our existing defences are no longer sufficient to protect ourselves against these newly emerging threats."
While Anthropic is not releasing Claude Mythos to the public, it is giving it to a select group of tech companies including Amazon, Google, Apple, Microsoft, Nvidia and Cisco as part of Project Glasswing, an initiative aimed at identifying critical software vulnerabilities and shoring up cyber defences. While some online have called Mythos "AI doomsday," others caution Anthropic's announcement could be a savvy marketing strategy in the race for AI dominance. It could also be both. "Many people would be right in saying that this is a little bit of hype, a little bit of press release, a little bit of publicity stunt," said Claudiu Popa, a cybersecurity expert. "But we certainly need to acknowledge the capability of that tool and we need to start preparing for such a time when there will be lots of AIs scouring the internet, looking for vulnerabilities." Popa says one of the most worrisome elements of Claude Mythos is that it does not require a sophisticated level of understanding of cybersecurity, making it potentially harmful in the hands of bad actors. "That's why this acts as not a reason to practice fear mongering, but to raise the level of awareness and concern amongst organizations," said Popa. Experts estimate it will be a matter of months before Mythos or a comparable AI model is available to the public, calling this a "call to action" for organizations and governments to beef up cybersecurity. "Make sure that there are as few vulnerabilities as possible," he said. "Ultimately, if it takes forever for one of these tools to discover a vulnerability ... then it will move on and find some easier targets."
[41]
Treasury Department Wants Access to Anthropic's Mythos | PYMNTS.com
The department's technology team is hoping to look closer at the artificial intelligence (AI) model to seek out vulnerabilities, Bloomberg News reported Tuesday (April 14), citing a source familiar with the situation. Treasury Chief Information Officer Sam Corcos hopes to get access to the model, which Anthropic has been rolling out to a limited number of institutions, as soon as this week, the source told Bloomberg. This person added that Corcos had discussed the technology with the Treasury's cybersecurity team last week and directed staff to be ready for threats from powerful AI systems. As Bloomberg noted, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell last week summoned Wall Street leaders to an urgent meeting amid fears that Mythos will bring about an era of increased cyber threats. Anthropic has cautioned that the model may be capable of powering cyberattacks if companies don't test it against their own systems and create defenses. The Bloomberg report also noted that the government is seeking access to Anthropic despite the Pentagon deeming the company a supply chain risk earlier this year. That designation came after a dispute with Anthropic over how its AI technology might be used by the military, with the Defense Department giving Anthropic six months to hand over AI services to another provider. Anthropic is contesting the designation in federal court. Writing about this issue earlier this week, PYMNTS argued that the rise of AI-driven vulnerability discovery may challenge some key assumptions about financial stability, including the idea that cyber risks are secondary to economic factors like liquidity crises. The challenge facing the financial sector is two-fold, the report added. On one hand, institutions must understand and quantify the threats posed by AI-driven vulnerability discovery. 
At the same time, they need to adapt their security architectures, processes and governance models to function in an environment where the pace of threat evolution has increased dramatically. "The involvement of the White House and leading banks signals that this shift is being taken seriously at the highest levels," that report said. "But awareness is only the first step. The real test will be whether the industry can move quickly enough to harness AI's defensive potential while mitigating its risks. In the race between attackers and defenders, speed has always mattered. With the advent of frontier AI, speed may become the defining factor."
[42]
Bank of Canada, Major Lenders Meet on Anthropic AI Cyber Risk
The Bank of Canada and the country's major banks and financial firms met Friday to discuss cybersecurity risks raised by Anthropic PBC's latest artificial intelligence model. The gathering followed a similar move by US policymakers earlier in the week. Bloomberg News reported Thursday that US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders for an urgent discussion about Anthropic's Mythos and similar AI models. The executives included Citigroup Inc.'s Jane Fraser and Goldman Sachs Group Inc.'s David Solomon. The Canadian meeting involved members of a body known as the Canadian Financial Sector Resiliency Group. It includes representatives from the six largest domestic banks, the federal Finance department, financial regulatory agencies, the parent company of the Toronto Stock Exchange and other firms. It's another signal of the growing concern among regulators globally that more powerful AI models will lead to a new breed of cyber attacks against the financial industry. A spokesperson for Finance Minister François-Philippe Champagne confirmed the meeting took place on Friday. "The Bank of Canada is aware of this issue. We take cybersecurity very seriously," Paul Badertscher, a spokesperson for the central bank, said by email. The CFSRG's mandate is to "enhance the operational resilience of Canada's critical financial sector."
[43]
White House Tells Banks to Use Anthropic to Spot Vulnerabilities | PYMNTS.com
That's according to a report Friday (April 10) from Bloomberg News, which points out that these tests are happening as the White House is encouraging banks to use Mythos to identify vulnerabilities. JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are among the banks testing the tool internally, the report added, citing sources familiar with the matter. These sources added that U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell had held a meeting last week with Wall Street executives, warning them that they should take the Mythos model seriously and use it to uncover holes in their defenses. A Treasury spokesperson told PYMNTS last week that the Trump administration plans to hold several more meetings with regulators and institutions on an ongoing basis, addressing AI and other issues. "President Trump and the Administration are continuing to engage on AI security in a thoughtful manner," the spokesperson said via email. "The White House has been leading an ongoing core interagency taskforce, which includes the Treasury, that has been proactively engaging across the government and industry to execute the first phases of a plan to ensure the United States and Americans are protected." Anthropic last week debuted a program called Project Glasswing that will allow select partners to gain early access to Claude Mythos. As covered here, this model is being positioned especially for defensive cybersecurity work, and the initiative is designed to let partners identify vulnerabilities and shore up systems before threats can be exploited.
In related news, PYMNTS last week covered the launch of Anthropic's Claude Managed Agents, an offering that allows enterprises to deploy AI directly within its platform. With this roll out, the company is targeting a problem that companies have spent the last year attempting to solve: getting AI agents to reliably perform within production environments. "That's proven harder than expected," PYMNTS wrote. "Enterprises experimenting with agents have run into a familiar set of challenges, from maintaining context across sessions to coordinating multistep workflows, integrating with internal systems like CRMs and databases and staying within strict security and compliance guardrails." Most companies have not developed that infrastructure on their own, that report added. Instead, they've reached out to a growing category of startups offering orchestration layers, workflow engines, and agent management tools, an area that Claude Managed Agents is now tackling more directly.
[44]
Washington warns banks of cyber risks linked to Anthropic's AI
According to reports from CNBC, Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent met with executives from leading US banks to assess the risks associated with the Mythos artificial intelligence model developed by Anthropic. This meeting, held on the sidelines of a financial forum in Washington, reflects mounting concerns regarding the ability of such systems to facilitate sophisticated cyberattacks. Anthropic recently launched a limited version of its model, Claude Mythos Preview, specifically due to fears of potential malicious use. Authorities are seeking to anticipate the impact of these technologies on sensitive sectors such as finance, where AI is increasingly integrated into critical operations. In response to these challenges, several major corporations, including JPMorgan, Apple, Google, Microsoft, and Nvidia, are participating in initiatives such as Project Glasswing to manage these risks. This mobilization underscores the need for strengthened dialogue between public authorities and private sector players to secure the deployment of increasingly powerful AI models.
[45]
Anthropic Model Scare Sparks Urgent Bessent, Powell Warning to Bank CEOs
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to an urgent meeting on concerns that the latest artificial intelligence model from Anthropic PBC will usher in an era of greater cyber risk. Bessent and Powell assembled the group at Treasury's headquarters in Washington on Tuesday to make sure banks are aware of possible future risks raised by Anthropic's Mythos and potential similar models, and are taking precautions to defend their systems, according to people familiar with the matter who asked not to be identified citing the private discussions. A representative for the Treasury didn't immediately respond to a request for comment. A spokesperson for the Fed declined to comment. The previously unreported meeting, arranged on short notice, is another sign that regulators consider the possibility of a new breed of cyber attacks as one of the biggest risks facing the financial industry. All the banks summoned to the meeting are classified as systemically important by top regulators, meaning their stability is a priority for the global financial system. Anthropic's Mythos is a more powerful system that the AI firm has said is capable of identifying and then exploiting vulnerabilities in every major operating system and web browser when directed by a user to do so. Regulators' caution about the power of the model in hackers' hands echoes Anthropic's own prudence.
Anthropic has limited its release to just a few major technology and finance firms at first. Those companies, which include Amazon.com Inc. and Apple Inc. as well as JPMorgan Chase & Co., are part of "Project Glasswing," which will work to secure the most important systems before other similar AI models become available. Anthropic has said that it had been in discussions with US officials about Mythos and its "offensive and defensive cyber capabilities" prior to its recent release. Chief executive officers summoned to the meeting with the Fed and Treasury include Citigroup Inc.'s Jane Fraser, Morgan Stanley's Ted Pick, Bank of America Corp.'s Brian Moynihan, Wells Fargo & Co.'s Charlie Scharf, and Goldman Sachs Group Inc.'s David Solomon, said the people. JPMorgan's Jamie Dimon was unable to attend, the people said. Spokespeople for the banks declined to comment. A representative for Anthropic had no immediate comment. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic's request to pause the Pentagon's designation.
[46]
Feds Warn Major Banks of Anthropic Mythos Cyber Threat | PYMNTS.com
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell arranged the meeting at Treasury's headquarters in Washington to ensure the banks are aware of the potential risks and are taking precautions, according to the report. Many of the CEOs were already in Washington to attend a meeting of the Financial Services Forum, per the report. Reached by PYMNTS, the Federal Reserve declined to comment on the report. The Treasury Department did not immediately reply to PYMNTS' request for comment. According to the report, Anthropic has limited the release of Mythos to a few major firms so they can secure their systems before similar AI models are released. It was reported Tuesday (April 7) that Anthropic unveiled a program called Project Glasswing that will allow select partners to gain early access to "Claude Mythos Preview." The initiative includes participation from leading companies such as Amazon, Microsoft and Apple, alongside cybersecurity and infrastructure players like Crowdstrike, Palo Alto Networks, Google and Nvidia. The model is being positioned specifically for defensive cybersecurity work, and the initiative is meant to allow partners to identify vulnerabilities and strengthen systems before threats can be exploited, according to the report. In its announcement of Project Glasswing, Anthropic said: "Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities." The company also said in the announcement: "We are hopeful that Project Glasswing can seed a larger effort across industry and the private sector, with all parties helping to address the biggest questions around the impact of powerful models on security." It was reported March 26 that an accidental data leak forced Anthropic to confirm that it is testing an unreleased AI model called Claude Mythos that has capabilities that exceed any system it has previously released. 
A day later, on March 27, it was reported that concerns about the capabilities of an AI model being tested by Anthropic drove down cybersecurity stocks. In November, Anthropic reported that another of its AI models had been manipulated into carrying out a wide-reaching cyber-espionage operation.
[47]
Bessent, Powell warned bank CEOs about Anthropic model risks, sources say
April 9 (Reuters) - U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank CEOs this week to warn of cyber risks posed by Anthropic's latest AI model, two sources familiar with the matter said on Thursday. Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company said the model was capable of identifying and exploiting weaknesses across "every major operating system and every major web browser". The meeting, held at the Treasury Department in Washington on Tuesday, was aimed at ensuring banks are aware of the potential risks posed by Mythos and similar models, and are taking steps to defend their systems, one of the sources said. The invitation came while most CEOs of the largest U.S. banks were already in Washington to attend other meetings, one of the sources said. Access to Mythos will be limited to about 40 technology companies, including Microsoft and Google, and Anthropic has been in ongoing talks with the U.S. government about the model's capabilities, the startup has said. Bloomberg News, which first reported the matter on Thursday, said the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo and Goldman Sachs were present. JPMorgan CEO Jamie Dimon was unable to join, one of the sources told Reuters. Goldman Sachs and the Federal Reserve declined to comment. The Treasury, lenders and Anthropic did not immediately respond to Reuters' requests for comment. (Reporting by Saeed Azhar in New York, Carlos MƩndez in Mexico City; Writing by Chris Thomas; Editing by Sumana Nandy)
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned major bank CEOs to discuss cyber risks posed by Anthropic's Mythos AI model. The model, deemed too dangerous for public release, can autonomously exploit software vulnerabilities across major systems, prompting urgent defensive measures across the financial sector.
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank executives earlier this week to address cyber risks posed by Anthropic Mythos, the AI company's latest model.[1][3] The gathering at the Treasury Department in Washington included leaders from Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo; JPMorgan Chase CEO Jamie Dimon was invited but unable to attend.[4] The message from Bessent and Powell was clear: use Mythos to find weaknesses in your systems immediately. Executives who attended refused to share details even with their top advisers, underscoring the gravity of the discussion.
Anthropic announced this week it would limit access to Mythos, marking the first time the company has restricted a model launch. The decision came after internal testing revealed the AI model could autonomously exploit software vulnerabilities rather than simply identify them. AI researcher Nicholas Carlini, who stress-tests Anthropic's models, discovered within hours that Mythos could create powerful break-in tools targeting Linux, the open-source code underpinning most modern computing. "AI had picked locks, but now it could pull off an entire heist," according to the report. Logan Graham, who runs Anthropic's Frontier Red Team, noted that while a previous model showed it could help people exploit vulnerabilities, Mythos could exploit them independently.

Anthropic's internal red team, a group of 15 employees tasked with ensuring models can't harm humanity, quickly recognized the national security implications. Chief Science Officer Jared Kaplan and co-founder Sam McCandlish concluded in early March that Mythos posed too much risk for general release but should be made available to select organizations as a defensive cybersecurity tool. The company released Mythos through Project Glasswing to a limited group including Amazon Web Services, Apple, and JPMorgan Chase. Anthropic stated that "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities."[4] The model has already identified thousands of severe vulnerabilities in every major operating system and web browser, some undetected for decades.[4]
Jamie Dimon addressed the paradox facing banks during JPMorgan's earnings call, explaining that while AI tools could eventually strengthen defenses, they're currently creating more problems. "AI's made it worse, it's made it harder," Dimon said, adding that "it does create additional vulnerabilities."[5] When asked about Mythos, he acknowledged it "shows a lot more vulnerabilities need to be fixed."[5] In his annual letter published this week, Dimon wrote that cyber security "remains one of our biggest risks" and that "AI will almost surely make this risk worse."[4] The meeting with bank executives aimed to ensure financial institutions understand AI's impact on financial system security and are taking steps to defend their systems.[3]
The urgent warnings from White House officials about Mythos as a hacking tool highlight how AI has become a decisive force in the cybersecurity threat landscape. Prior to external release, Anthropic briefed senior US government officials on Mythos's full offensive and defensive cyber capabilities and is holding ongoing discussions with international governments. U.K. financial regulators are also discussing the risk posed by Mythos.[1] The situation grows more complex as Anthropic currently battles the Trump administration in court over the Department of Defense's designation of Anthropic as a supply-chain risk following failed negotiations about limiting government use of its AI models.[1] JPMorgan dedicates significant resources to staying ahead of threats, with Dimon noting the bank has "top experts" in "constant contact with the government."[5] Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are reportedly testing Mythos despite JPMorgan being the only bank initially listed as a partner.[1]