6 Sources
[1]
Why the world's banks are so worried about Anthropic's latest AI model
The legendary American bank robber Willie Sutton spent 40 years robbing banks because, as he claimed in his autobiography, he loved doing it. And when asked why he chose banks of all places to rob, he allegedly replied "Because that's where the money is." Back in 2017, I wrote a book predicting it wasn't just lovable rogues like Sutton who would soon be robbing banks, but artificial intelligence (AI). That day, it appears, could now be about to arrive. Banks around the world are seriously worried cyber criminals will soon take advantage of the latest advances in AI to try to rob them.

The digital back door into the vault

The finance world's concern rests on the impressive cyber capabilities of a product called "Mythos". This is the latest and most capable AI model from Anthropic, the company behind the popular Claude chatbot. As a member of the public, you can't access or use this model - for now. That's because Anthropic (and many others) believe Mythos is too capable to launch upon an unsuspecting world. Internal testing of Mythos has uncovered thousands of severe security vulnerabilities across every major operating system and web browser. Some of these vulnerabilities have gone undetected for decades. Many are what tech insiders call "zero day" vulnerabilities - flaws previously unknown to the software's developers, who have therefore had zero days to prepare a fix.

Not for public use

To counter this emerging threat, Anthropic has made the model available to a dozen partners in a defensive coalition that includes Microsoft, Amazon Web Services, Apple, Cisco and the Linux Foundation. The company has also committed US$100 million (about A$140 million) in usage credits and US$4 million (about A$5.6 million) in open-source grants to start finding and fixing these bugs. More than 40 additional organisations - including a number of US banks - have also received access.
But worryingly, as far as we know, Anthropic has not yet granted access to any banks in Australia, the United Kingdom or Europe. To add to concerns, on Wednesday, Anthropic confirmed it was investigating claims in a Bloomberg report that a small group of unauthorised users had gained access to Mythos. However, at this stage, there is no suggestion this alleged access was for malicious purposes.

Should you be worried?

Last week, regulators and policymakers from around the world gathered at the International Monetary Fund spring meeting in Washington. The Iran war was a major focus, but attendees also issued a series of warnings about this emerging cybersecurity threat to the banking industry. Not only are banks an attractive target, being where the money is, but the industry runs on many legacy systems: decades-old technology that may be especially vulnerable to these sorts of attacks. You personally don't need to be too worried. Many countries have strong protections for bank customers. In Australia, for example, the first A$250,000 of a customer's deposits is insured through the government-backed Financial Claims Scheme. And the Australian Securities and Investments Commission ensures banks investigate and reimburse fraudulent transactions where the customer is not at fault. So it's probably not wise to withdraw your cash and put it under the mattress. But banks should be (and are) rushing to plug these vulnerabilities. I would recommend you regularly update your computer and smartphone so you have the latest operating system and banking apps; there are likely to be many more updates in the near future as new vulnerabilities are uncovered and patched. And, as I'm sure you have been, you need to be ever vigilant for phishing attacks by email and SMS trying to obtain your banking credentials.

The evolving threat landscape

In the longer term, Mythos exposes the challenge that defence is much harder than attack.
Software is one of the most complex products humanity builds. It is therefore almost impossible to ensure it is bug-free. That puts us in an unending race against the "bad guys" to uncover and fix faults before they are exploited. For example, with significant fanfare, the European Union just released its age verification app, designed to be a cornerstone of the emerging laws on access to social media, pornography and other age-restricted content. Within hours, however, security experts found vulnerabilities that underage users could easily exploit. In the most critical settings, we can try to prove mathematically that our software is bug-free. For instance, the Beneficial AI Foundation just announced an ambitious "moonshot" project to prove that the popular messaging app Signal is bug-free and protects privacy as claimed. But such efforts are today the exception rather than the norm. Perhaps further advances in AI could soon help reverse this.
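The "prove mathematically" approach mentioned above is known as formal verification. As a toy, hypothetical illustration (vastly simpler than anything in the Signal effort, and not drawn from it), here is what such a proof looks like in the Lean proof assistant: instead of testing a function on a handful of inputs, we prove a specification that holds for every possible input.

```lean
-- Hypothetical toy example of formal verification: `double` and `double_spec`
-- are illustrative names, not from any real verification project.
def double (n : Nat) : Nat := n + n

-- The theorem states the specification for *all* natural numbers n.
-- `unfold` exposes the definition; the `omega` linear-arithmetic decision
-- procedure then closes the goal `n + n = 2 * n`.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

A real verification effort, such as proving security properties of a messaging protocol, rests on the same principle, only at enormously greater scale: once the proof is accepted, the property is guaranteed for every input, not just the ones a tester thought to try.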
[2]
Finma Says Immediate Mythos Access Would Pose Systemic Bank Risk
Regulators and banks are taking steps to prepare for potential threats from Mythos, with Finma stating that it takes the rapid development of AI very seriously and is coordinating with international authorities. Switzerland's top financial regulator said giving banks quick and easy access to Anthropic PBC's artificial intelligence tool Mythos would create a severe risk for the country's financial system. "The uncontrolled and immediate availability of AI models such as Mythos would be classified as a systemic risk," a spokesperson for Finma said in response to questions from Bloomberg News. "In such a scenario, virtually all existing software systems could simultaneously be affected by a multitude of previously unknown zero-day vulnerabilities, which would be exploited immediately and via AI." Anthropic has said that Mythos is too powerful to release to the general public, describing the model as so good at finding vulnerabilities in software that it will be released only to a limited number of carefully chosen parties. If tools like Mythos fall into the wrong hands, they could give attackers a powerful new weapon to steal data or disrupt critical infrastructure, the firm has said. Regulators, central bankers and corporate executives have been seeking to gain more insight into the technology. There are concerns that financial systems outside the US, including Europe's, are at a disadvantage because they have limited access. The European Central Bank was planning to convene a call later in the week to discuss the potential threats from Mythos with the chief risk officers of eurozone lenders, Bloomberg News reported on Thursday last week. "We must prevent the misuse of this technology," Bundesbank President Joachim Nagel said on Tuesday. "At the same time, all relevant institutions should have access to such technology to avoid competitive distortions."
Commerzbank AG is "examining the Mythos model very closely and assessing the associated risks," a spokesperson said in a statement. "To this end, we are also in close contact with other banks, technology partners and regulatory authorities."

Well Prepared

Germany's banks are well prepared for heightened cyber risks emanating from the new technology, Deutsche Bank AG Chief Executive Officer Christian Sewing said on Monday. Finma added in its statement that it "takes the rapid development of AI very seriously" and it's "in contact with the Federal Office for Cybersecurity, banks, and critical service providers" while also "coordinating with international authorities." Banks "must actively incorporate the evolving threat landscape into their risk management," it said. "Cyber attacks are becoming faster, more precise, and easier to carry out with the help of AI. A strong awareness of the risks at all levels, continuous communication with service providers, and a clearly risk-oriented approach are essential."
[3]
New technology is increasing the speed and depth of cyber attacks
Financial services companies are stepping up efforts to strengthen their defences against the growing threat of cyber crime, in an attempt to safeguard their clients, customers and themselves from costly hacks and reputational damage. Banks such as JPMorgan Chase, Lloyds Banking Group and Santander are taking measures to keep their systems safe amid an increasing number of cyber attacks, as threat actors use new technology to increase the speed and breadth of their attacks. "More is changing now and faster than we have seen in a long time, the time to find and exploit vulnerabilities is drastically decreasing," says Patrick Opet, chief information security officer at JPMorgan. According to IBM's X-Force 2025 Threat Intelligence Index, the finance and insurance sectors accounted for 27 per cent of all incidents in 2025, the second highest share of all industries. The status of financial institutions as the backbone of modern economies has made them an obvious target for attackers. In particular, their large financial reserves and wealth of customer data make them an attractive option for those seeking to commit cyber crime. "It is the ability to get paid in a ransomware scenario [which motivates hackers]," says Katherine Kearns, head of proactive cyber services at cyber security consultancy S-RM, adding that this has made financial services companies inviting targets. For this reason, banks have often been at the forefront of cyber security, being early adopters of technology such as multi-factor authentication and increased supply chain protections. However, experts have now warned that the rapid adoption of AI has given cyber criminals a new avenue to commit attacks. According to research commissioned by financial and risk advisory company Kroll, 76 per cent of organisations have experienced a security incident involving AI applications or models in the past two years.
Further, the use of AI has made it easier for threat actors to commit attacks and to socially engineer victims, for example through fake phone calls or "deepfake" videos. "I received an email pretending to be from a big US bank and it looked absolutely perfect, it was well written, it had a LinkedIn profile and the only thing that got me suspicious was the email address," says Nick Calver, vice-president for the financial services industry at cyber security company Palo Alto Networks and former cyber executive at Lloyds Bank and HSBC. "It's making threats more real and it's only going to get more threatening," he adds. AI has also enabled hackers to better pick their points of entry, which has often led cyber criminals to look to supply chains as the "soft underbelly" of their real target. The rapidly changing landscape has forced financial institutions and their cyber advisers to improve their defences. Thomas Harvey, chief information security officer at Santander UK, told the FT's Cyber Resilience Summit in December that to mitigate that risk the company regularly audited its partners. "We have a lot of cyber security clauses which we stipulate within, we go through various different cyber security assessments in terms of onboarding and we have monitoring tools which are monitoring the external posture of our supply chain in case there are changes," he said. "We said to suppliers that if you don't change your approach, we will stop buying," JPMorgan's Opet says, highlighting a 2025 letter sent to suppliers. "We started probing suppliers on how they defend themselves . . . And so essentially we collect information on all of our suppliers, to identify weaknesses in their infrastructure or signs of pre-compromise," he adds.
Meanwhile, Lloyds has developed its Global Correlation Engine, an AI tool which helps identify threats and reduce false positives -- activity that is misidentified as being malicious. "Most financial institutions are re-evaluating what 'good' looks like," says Brent Tomlinson, president of risk advisory at Kroll. "[Most cyber incidents are] predominantly an identity issue; social engineering, phishing, etc, so more robust compliance training and programmes, more internal guardrails [are being implemented]," he adds. The escalating situation has also prompted warnings from cyber experts that companies should train their employees to be aware of such threats. "If someone rings you up for a password reset, don't just take this at face value, make them come into the office," says Toby Lewis, head of threat analysis at Darktrace, a cyber security group. Regulators such as the Bank of England and the Financial Conduct Authority are advising those in the sector to focus on resilience so that systems can be restored quickly in the event of an attack, according to a person familiar with their thinking. The changes have also seen some turn to former hackers in order to get inside the heads of those attacking them, learning from the people who know them best. One of the organisations helping to link the two is The Hacking Games, a group formed in 2023 to help neurodivergent and unconventional cyber talent fall into the hands of the "good guys" rather than criminal gangs. "The good guys are not very good at recruiting and we wanted to change that," says Oliver Roskill, co-founder of the group. The Hacking Games now has partnerships with organisations including the Co-op, the UK retail group which was hit by a damaging cyber attack last year. The collaboration sees groups go into schools to give career talks and assess interested students to see if they could have a calling in cyber security.
"Traditional hiring looks at intelligence, we look at personality and cognitive ability," Roskill adds. "We do careers talks, and when we get to 16, we will . . . test their aptitudes for a career in cyber." One of those working with The Hacking Games is Conor Freeman, a former hacker who served more than two years in prison after being charged with his crimes. Upon his release, Freeman enrolled in a master's degree in cyber security at University College Dublin, but struggled to find work because of the stigma associated with his criminal record. "In the last three years of my life, I had many interviews but as soon as my background came up, I was rejected . . . as soon as they raised it with shareholders, they said not a chance." Then he was introduced to The Hacking Games by a mutual acquaintance, and now works on the other half of The Hacking Games' business model: "offensive security services." "We've had a lot of people looking for that unconventional point . . . people want a hacker to hack their company like a real hacker would do," Freeman says. "I thought the only thing I'm good at was hacking, but had I been exposed to other routes at a younger age I may have gone in a different direction," he adds.
[4]
Banks are seeking to use AI as a tool for both protection and competition
HSBC's appointment of its first head of AI is perhaps evidence of how seriously global banks are taking the technology. Although Europe's largest lender is streamlining its business, promising shareholders that it will reduce headcount and cut $1.5bn of costs by the end of this year, it carved out the money for its inaugural chief AI officer, David Rice, to start work this month. "Our customers increasingly expect their bank to deliver services uniquely aligned to their specific needs, and fast," said Georges Elhedery, HSBC's chief executive, when the appointment was announced. "We're building a bank that is designed for the future and AI plays a key role in how we get there." But, while banks are pinning their hopes on the possible efficiency gains the technology might bring, as well as untapped revenue streams, the increasing use of AI by criminals is also causing a significant problem for lenders. Banks in the UK, for example, are dealing with a sharp rise in customer accounts being used to facilitate fraud, according to data from industry body Cifas. It found that AI scams pushed reports of fraud up to a record 444,000 last year, with criminals increasingly exploiting the technology to take over people's mobile, banking and online shopping accounts. The research also found that identity fraud in banking rose 10 per cent year on year to 63,678 cases as AI-powered impersonation and synthetic media make identity fraud and account takeover more difficult to spot. Mike Haley, CEO of Cifas, says: "Our data and intelligence show how fraud is being industrialised, with AI accelerating crime that is increasingly digital, organised and international. "Fraud must be treated as a national enforcement priority. Closing the gap requires decisive action, robust disruption of criminal networks, and greater sharing of cross‑sector data and intelligence to stop fraud at the source." 
AI has created a growing "innovation asymmetry", says Shanker Ramamurthy, global managing partner, banking and financial markets at IBM, who points out that banks are having to operate within frameworks of regulation and ethics, while criminals are leveraging AI unencumbered. "[This is] accelerating the threat landscape at an exponential rate," he says. "The challenge is not just that criminals are using new tools. It is that they are using AI to exploit existing vulnerabilities with extreme precision. We are seeing a shift towards a cognitive form of copying where automated attacks perfectly replicate legitimate customer behaviour, making them nearly invisible to legacy systems." Michael Down, global head of financial services at technology firm Neo4j, agrees. He believes AI has fundamentally altered the landscape for criminals by making them appear much more credible, especially with generative AI, which now allows fraudsters to sound and talk exactly like a specific customer. "Conventional security systems already struggle to see the hidden patterns connecting different actors because they treat every interaction as a standalone event, but adding that level of sophistication makes criminal networks even harder to uncover," says Down. He has seen this frequently play out with the creation of fake loan or mortgage services in which fraudsters set up "digital fronts" to harvest sensitive data, collect upfront application fees and then use that stolen information to secure legitimate financing before vanishing with the funds. "On the surface, they're separate, unrelated requests from different people, but in reality, they're deliberate disguises created by criminals. These requests look and sound so convincing that you have to identify device patterns and account behaviours across an entire network to root out these actors," says Down.
The feeling is banks are caught in a never-ending game of catch-up, with increasing visibility into people's lives, partly as a result of social media, giving criminals access to countless data points they can use to spoof people's voices and images, imitate behavioural patterns and bypass security checks. Robert Gerstmann, co-founder of communications platform Sinch, says: "AI is making fraud both more sophisticated and more scalable, which makes it harder for banks to keep pace. Criminals can now generate convincing messages, voices and identities at speed, lowering the barrier to entry and increasing the volume of attacks. This creates a dual challenge where speed and adaptability are critical, resulting in an ongoing arms race." So what can lenders do about it? Experts suggest that to try to stay one step ahead of criminals, banks have to embed real-time risk controls into the technology they are using, with the aim of turning AI from a complex risk consideration into an advantage for both protection and competition. "To close the gap, security leaders at banks must pivot from reactive defence to predictive intelligence," says IBM's Ramamurthy. "The goal is not just to catch fraud faster, it is to build a resilient ecosystem that anticipates the threat and disrupts attack paths before malicious transactions can be initiated."
[5]
The Mythos meeting focused on the wrong AI risk to banks. Here's the one nobody is talking about | Fortune
When Treasury Secretary Scott Bessent and Federal Reserve Chair Jay Powell convened the chief executives of leading U.S. banks earlier this month to discuss Anthropic's latest model, Mythos, they signaled a shift in how artificial intelligence is being understood in finance. This was not a meeting about innovation but a warning: that models capable of identifying and exploiting vulnerabilities could pose a material risk to core financial infrastructure. That concern is justified. But the focus remains too narrow. In recent years, in discussions with leading financial institutions, I have seen how quickly concern rises once the adversarial uses of AI are understood. Yet the translation into action remains slow and uneven. Much of the current attention is focused on cyber risk. This is a serious threat. But it is not the only one and not the most immediate. Alongside the risks highlighted by Mythos, a parallel threat is already unfolding at scale. It does not depend on new frontier models, but on AI capabilities that are already widely available. And unlike cyber attacks, which require access to systems, this threat operates by targeting people. Artificial intelligence has made fraud dramatically cheaper, easier to execute and far more scalable. What once required time and coordination can now be automated and deployed at industrial scale. AI systems can generate thousands of convincing messages, voices and videos in seconds, each tailored to a specific individual. This is not incremental. It is structural. Fraud has shifted from a manual activity to a machine-driven one. Hyper-personalised social engineering campaigns, often powered by AI agents, now operate across multiple channels, jurisdictions and identities. They impersonate executives, advisers or family members with increasing credibility, creating urgency and inducing authorised transfers. In these scenarios, the system is not breached. It is bypassed. Customers are not necessarily hacked. 
They are convinced. And because transactions are authorised, existing safeguards are often ineffective. Biometric checks can be defeated by deepfakes. Rule-based monitoring is calibrated to detect human fraudsters, not coordinated networks of AI agents operating at machine speed. This creates a fundamentally different type of risk. Unlike cyber attacks, which tend to be episodic and visible, AI-enabled fraud operates as a continuous and distributed leakage of funds across millions of transactions. It is a creeping threat: easier to execute, faster to scale, and often invisible until losses become material. The trajectory points toward trillions of dollars in losses in the coming years. If the public comes to believe that financial institutions cannot protect customers from manipulation and fraud, trust in the system will erode. The consequences will extend beyond losses. Friction will rise, customers will hesitate, and confidence in banks' ability to safeguard money may weaken in ways no less damaging than cyber threats. This is not a greater threat than cyber risk. It is a parallel one. And it deserves similar attention. Most institutions still rely on fragmented data, legacy monitoring and human-led analysis that cannot keep pace with adaptive, AI-driven threats. A meaningful response requires architectural redesign: real-time, AI-native detection; integration of fraud, AML and behavioural signals; and the ability to intervene at the point of transaction, including in authorised payments. It also requires moving from isolated to coordinated defence. Fraud campaigns target customers across institutions simultaneously, while controls remain siloed. Effective response depends on identifying patterns and campaigns in real time. Privacy and competition considerations remain important, but they can no longer justify structural blind spots. Privacy-preserving technologies offer a path forward, enabling institutions to share signals without exposing sensitive data. 
In parallel, institutions need to adopt a "Defence AI" approach: using AI to defend against AI-driven threats. Human-only first lines of defence cannot scale. AI-native systems must support faster detection and response under human oversight. The lesson from the Mythos moment is not only that AI can break systems. It is that the financial system is already being exploited in another way that is less visible, more scalable and potentially just as corrosive. If the financial system does not respond quickly, the consequences will be severe: rising losses, rising friction, and a significant erosion of public trust. Regulators should be convening senior financial leaders on this issue, too, as a parallel AI risk, before a catastrophe that is already within reach of bad actors fully materialises. The financial system, the technology sector and policymakers must now recognise the scale of this vulnerability and act with far greater urgency.
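The architectural prescription above (real-time detection that blends fraud, AML and behavioural signals and can intervene even in authorised payments) can be sketched in miniature. The following Python toy is purely illustrative; every name, weight and threshold here is an assumption for the example, and nothing in it reflects any real institution's detection system.

```python
# Illustrative sketch only: a toy risk gate that combines fraud, AML and
# behavioural signals for a single authorised payment. All weights and
# thresholds are hypothetical, chosen for readability, not realism.

from dataclasses import dataclass

@dataclass
class Payment:
    amount: float          # transaction value
    new_payee: bool        # first transfer to this recipient?
    device_score: float    # 0.0 (familiar device/behaviour) .. 1.0 (anomalous)
    aml_flag: bool         # matched an anti-money-laundering watch pattern

def risk_score(p: Payment) -> float:
    """Blend independent signals into a single 0..1 risk score."""
    score = 0.0
    if p.new_payee:
        score += 0.3
    if p.aml_flag:
        score += 0.4
    score += 0.3 * p.device_score
    # Large transfers amplify whatever risk is already present.
    if p.amount > 10_000:
        score = min(1.0, score * 1.5)
    return score

def decide(p: Payment, hold_threshold: float = 0.6) -> str:
    """Intervene at the point of transaction, even for authorised payments."""
    return "hold_for_review" if risk_score(p) >= hold_threshold else "release"

routine = Payment(amount=120.0, new_payee=False, device_score=0.1, aml_flag=False)
suspect = Payment(amount=25_000.0, new_payee=True, device_score=0.8, aml_flag=False)

print(decide(routine))  # low score: the everyday payment is released
print(decide(suspect))  # large transfer to a new payee from an odd device is held
```

The point of the sketch is the architecture, not the arithmetic: no single signal here is damning on its own, but because the signals are combined at the moment of the transaction, a coordinated pattern can trigger a hold before an authorised transfer leaves the bank.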
[6]
Banks Up Defenses as AI Drives 76% of Cyberattacks | PYMNTS.com
"More is changing now and faster than we have seen in a long time, the time to find and exploit vulnerabilities is drastically decreasing," Patrick Opet, chief information security officer at JPMorgan, told the FT. The report cites the IBM X-Force 2025 Threat Intelligence Index, which showed that the finance and insurance industries made up 27% of cyberattacks last year, the second largest share among all sectors. As the FT notes, the role of financial institutions at the core of modern economies has made them an obvious target for cybercriminals who hope to take advantage of both their financial reserves and large stores of customer data. "It is the ability to get paid in a ransomware scenario [which motivates hackers]," said Katherine Kearns, head of proactive cyber services at cyber security consultancy S-RM, adding that this has turned financial services firms into attractive targets. It's why banks are often early adopters of new cybersecurity technology such as multi-factor authentication and increased supply chain safeguards, the FT added. Still, experts have begun to caution that rapid AI adoption has provided cybercriminals with new methods of attack. The FT cites research commissioned by financial and risk advisory company Kroll that show 76% of companies have experienced a security incident involving AI applications or models in the last two years. And as covered here last week, the newest models from artificial intelligence giants like OpenAI and Anthropic could mark a critical inflection point in the cybersecurity space. "AI is no longer just a tool in the hands of an attacker; it is beginning to replicate aspects of the attacker itself," that report said. For both finance chiefs and information security executives, the implication is increasingly stark, the report continued, with cyber risk shifting from a targeted phenomenon to something more akin to ambient exposure. 
"Organizations are not just selected; they are continuously scanned, probed and tested by systems operating at scale," PYMNTS added. "The median enterprise, the one with uneven patching, over-permissioned accounts, and inconsistent configuration management, is now more accessible to multistep intrusion attempts that can be executed, or at least orchestrated, by AI systems."
Anthropic's powerful Mythos AI model has uncovered thousands of severe security vulnerabilities across major operating systems, prompting urgent warnings from financial regulators worldwide. Switzerland's Finma labels unrestricted access a systemic bank risk, while experts warn that AI-powered fraud poses an equally dangerous parallel threat already operating at industrial scale.
The global banking sector faces a critical moment as Anthropic's latest AI model, Mythos, reveals thousands of severe security vulnerabilities that have remained hidden for decades [1]. Internal testing uncovered what cybersecurity experts call zero-day vulnerabilities, flaws previously unknown to the software's developers, across every major operating system and web browser. The model's capabilities are so advanced that Anthropic has restricted public access, making it available only to a defensive coalition including Microsoft, Amazon Web Services, Apple, Cisco, and the Linux Foundation, along with more than 40 additional organizations including several US banks [1].
Switzerland's top financial regulator Finma issued a stark warning about the systemic risk posed by unrestricted access to AI models like Mythos. "The uncontrolled and immediate availability of AI models such as Mythos would be classified as a systemic risk," a Finma spokesperson stated, noting that "virtually all existing software systems could simultaneously be affected by a multitude of previously unknown zero-day vulnerabilities, which would be exploited immediately and via AI" [2]. This concern reflects the broader anxiety among financial regulators worldwide about the cybersecurity threat emerging from advanced AI capabilities.

The European Central Bank convened emergency discussions with chief risk officers of eurozone lenders to assess potential threats from Mythos [2]. Bundesbank President Joachim Nagel emphasized the delicate balance required: "We must prevent the misuse of this technology. At the same time, all relevant institutions should have access to such technology to avoid competitive distortions" [2]. Commerzbank AG confirmed it is examining the Mythos model closely while maintaining contact with other banks, technology partners, and regulatory authorities.

Anthropic has committed $100 million in usage credits and $4 million in open-source grants to help find and fix the vulnerabilities discovered by Mythos [1]. However, concerns intensified when Bloomberg reported that a small group of unauthorized users may have gained access to the model, though Anthropic stated there was no evidence of malicious intent [1]. The incident underscores the challenge of containing powerful AI tools in an interconnected digital ecosystem.

Banks present particularly attractive targets for cyber attacks because the industry runs on legacy systems: decades-old technology that may be especially vulnerable to AI-discovered flaws [1]. According to IBM's X-Force 2025 Threat Intelligence Index, the finance and insurance sectors accounted for 27 per cent of all incidents in 2025, the second highest share across all industries [3]. Patrick Opet, chief information security officer at JPMorgan Chase, noted that "more is changing now and faster than we have seen in a long time, the time to find and exploit vulnerabilities is drastically decreasing" [3].
The rapid adoption of AI in banking has created what IBM's Shanker Ramamurthy describes as "innovation asymmetry": banks must operate within frameworks of regulation and ethics while criminals leverage AI unencumbered, "accelerating the threat landscape at an exponential rate" [4]. This asymmetry puts financial institutions in a cybersecurity arms race where defense proves far more difficult than attack, as software complexity makes bug-free systems nearly impossible to guarantee [1].

While Mythos dominates regulatory attention, experts warn that AI-powered fraud represents an equally dangerous but more immediate threat already operating at industrial scale. Research commissioned by Kroll found that 76 per cent of organizations have experienced a security incident involving AI applications or models in the past two years [3]. UK industry body Cifas reported that AI scams pushed fraud reports to a record 444,000 last year, with identity fraud in banking rising 10 per cent year-on-year to 63,678 cases as deepfakes and AI-powered social engineering make fraud more difficult to detect [4].
Unlike cyber attacks that breach systems, AI-powered fraud bypasses security by targeting people directly. "AI systems can generate thousands of convincing messages, voices and videos in seconds, each tailored to a specific individual," creating authorised transactions that existing safeguards struggle to prevent [5]. Nick Calver, vice-president for financial services at Palo Alto Networks, described receiving a perfectly crafted phishing email that only raised suspicion due to the sender's email address, noting "it's making threats more real and it's only going to get more threatening" [3].

Major banks are implementing comprehensive strategies to address the AI-driven threat landscape. JPMorgan Chase sent letters to suppliers in 2025 warning that failure to improve cybersecurity practices would result in lost business, while probing suppliers on their defensive capabilities to identify weaknesses in infrastructure [3]. Lloyds Banking Group developed its Global Correlation Engine, an AI tool that helps identify threats and reduce false positives, activity misidentified as malicious [3]. HSBC appointed its first chief AI officer, David Rice, despite cost-cutting measures that include reducing headcount and cutting $1.5 billion by year-end [4].

Finma emphasized that banks "must actively incorporate the evolving threat landscape into their risk management," noting that "cyber attacks are becoming faster, more precise, and easier to carry out with the help of AI" [2]. Experts advocate for "Defence AI" approaches, using AI to defend against AI-driven threats, as human-only defenses cannot scale to match machine-speed attacks [5]. This requires architectural redesign with real-time, AI-native detection and integration of fraud, anti-money laundering, and behavioral signals to enable intervention at the point of transaction.

The Mythos situation exposes fundamental challenges in maintaining cyber resilience as AI capabilities advance. Treasury Secretary Scott Bessent and Federal Reserve Chair Jay Powell's recent meeting with bank CEOs signals that AI risk is now understood as a material threat to core financial infrastructure [5]. However, the focus on cyber vulnerabilities, while justified, may overlook the parallel threat of AI-powered social engineering that operates as "a continuous and distributed leakage of funds across millions of transactions" [5].

Bank customers face minimal direct risk thanks to deposit insurance schemes and fraud reimbursement policies, but they should stay vigilant and regularly update their operating systems and banking apps as new vulnerabilities are discovered and patched [1]. The trajectory points toward potential trillions of dollars in losses if the financial system fails to respond with sufficient urgency to both the cybersecurity threat posed by models like Mythos and the escalating wave of AI-powered fraud [5]. As one expert noted, if the public loses confidence that financial institutions can protect against manipulation and fraud, the erosion of trust may prove no less damaging than direct cyber attacks.