4 Sources
[1]
New technology is increasing the speed and depth of cyber attacks
Financial services companies are stepping up efforts to strengthen their defences against the growing threat of cyber crime, in an attempt to safeguard their clients, customers and themselves from costly hacks and reputational damage.

Banks such as JPMorgan Chase, Lloyds Banking Group and Santander are taking measures to keep their systems safe amid a rising number of cyber attacks, as threat actors use new technology to increase the speed and breadth of their operations. "More is changing now and faster than we have seen in a long time, the time to find and exploit vulnerabilities is drastically decreasing," says Patrick Opet, chief information security officer at JPMorgan.

According to IBM's X-Force 2025 Threat Intelligence Index, the finance and insurance sectors accounted for 27 per cent of all incidents in 2025, the second-highest share of any industry. The status of financial institutions as the backbone of modern economies has made them an obvious target for attackers. In particular, their large financial reserves and wealth of customer data make them attractive to those seeking to commit cyber crime. "It is the ability to get paid in a ransomware scenario [which motivates hackers]," says Katherine Kearns, head of proactive cyber services at cyber security consultancy S-RM, adding that this has made financial services companies inviting targets.

For this reason, banks have often been at the forefront of cyber security, as early adopters of technology such as multi-factor authentication and stronger supply chain protections. However, experts now warn that the rapid adoption of AI has given cyber criminals a new avenue of attack. According to research commissioned by financial and risk advisory company Kroll, 76 per cent of organisations have experienced a security incident involving AI applications or models in the past two years.
Further, the use of AI has made it easier for threat actors to mount attacks and has sharpened their efforts to socially engineer victims, for example through fake phone calls or "deepfake" videos. "I received an email pretending to be from a big US bank and it looked absolutely perfect, it was well written, it had a LinkedIn profile and the only thing that got me suspicious was the email address," says Nick Calver, vice-president for the financial services industry at cyber security company Palo Alto Networks and a former cyber executive at Lloyds Bank and HSBC. "It's making threats more real and it's only going to get more threatening," he adds.

AI has also enabled hackers to pick their points of entry more effectively, which has often led cyber criminals to treat supply chains as the "soft underbelly" of their real target. The rapidly changing landscape has forced financial institutions and their cyber advisers to improve their defences in response.

Thomas Harvey, chief information security officer at Santander UK, told the FT's Cyber Resilience Summit in December that, to mitigate that risk, the company regularly audited its partners. "We have a lot of cyber security clauses which we stipulate within, we go through various different cyber security assessments in terms of onboarding and we have monitoring tools which are monitoring the external posture of our supply chain in case there are changes," he said.

"We said to suppliers that if you don't change your approach, we will stop buying," JPMorgan's Opet says, highlighting a 2025 letter sent to suppliers. "We started probing suppliers on how they defend themselves . . . And so essentially we collect information on all of our suppliers, to identify weaknesses in their infrastructure or signs of pre-compromise," he adds.
Meanwhile, Lloyds has developed its Global Correlation Engine, an AI tool which helps identify threats and reduce false positives -- activity that is misidentified as malicious.

"Most financial institutions are re-evaluating what 'good' looks like," says Brent Tomlinson, president of risk advisory at Kroll. "[Most cyber incidents are] predominantly an identity issue; social engineering, phishing, etc, so more robust compliance training and programmes, more internal guardrails [are being implemented]," he adds.

The escalating situation has also prompted warnings from cyber experts that companies should train their employees to be aware of such threats. "If someone rings you up for a password reset, don't just take this at face value, make them come into the office," says Toby Lewis, head of threat analysis at Darktrace, a cyber security group. Regulators such as the Bank of England and the Financial Conduct Authority are advising those in the sector to focus on resilience, so that systems can be restored quickly in the event of an attack, according to a person familiar with their thinking.

The changes have also seen some turn to former hackers to get inside the heads of their attackers, learning from the people who know them best. One of the organisations linking the two is The Hacking Games, a group formed in 2023 to help neurodivergent and unconventional cyber talent fall into the hands of the "good guys" rather than criminal gangs. "The good guys are not very good at recruiting and we wanted to change that," says Oliver Roskill, co-founder of the group.

The Hacking Games now has partnerships with organisations including the Co-op, the UK retail group which was hit by a damaging cyber attack last year. The collaboration sees groups go into schools to give careers talks and assess interested students to see if they might have a calling in cyber security.
"Traditional hiring looks at intelligence, we look at personality and cognitive ability," Roskill adds. "We do careers talks, and when we get to 16, we will . . . test their aptitudes for a career in cyber."

One of those working with The Hacking Games is Conor Freeman, a former hacker who served more than two years in prison for his crimes. Upon his release, Freeman enrolled in a master's degree in cyber security at University College Dublin, but struggled to find work because of the stigma attached to his criminal record. "In the last three years of my life, I had many interviews but as soon as my background came up, I was rejected . . . as soon as they raised it with shareholders, they said not a chance."

He was then introduced to The Hacking Games by a mutual acquaintance, and now works on the other half of the group's business model: "offensive security services". "We've had a lot of people looking for that unconventional point . . . people want a hacker to hack their company like a real hacker would do," Freeman says. "I thought the only thing I'm good at was hacking, but had I been exposed to other routes at a younger age I may have gone in a different direction," he adds.
[2]
Banks are seeking to use AI as a tool for both protection and competition
HSBC's appointment of its first head of AI is perhaps evidence of how seriously global banks are taking the technology. Although Europe's largest lender is streamlining its business, promising shareholders that it will reduce headcount and cut $1.5bn of costs by the end of this year, it carved out the money for its inaugural chief AI officer, David Rice, to start work this month.

"Our customers increasingly expect their bank to deliver services uniquely aligned to their specific needs, and fast," said Georges Elhedery, HSBC's chief executive, when the appointment was announced. "We're building a bank that is designed for the future and AI plays a key role in how we get there."

But, while banks are pinning their hopes on the efficiency gains the technology might bring, as well as untapped revenue streams, criminals' growing use of AI is also causing lenders a significant problem. Banks in the UK, for example, are dealing with a sharp rise in customer accounts being used to facilitate fraud, according to data from industry body Cifas. It found that AI scams pushed reports of fraud to a record 444,000 last year, with criminals increasingly exploiting the technology to take over people's mobile, banking and online shopping accounts. The research also found that identity fraud in banking rose 10 per cent year on year to 63,678 cases, as AI-powered impersonation and synthetic media make identity fraud and account takeover more difficult to spot.

Mike Haley, chief executive of Cifas, says: "Our data and intelligence show how fraud is being industrialised, with AI accelerating crime that is increasingly digital, organised and international. Fraud must be treated as a national enforcement priority. Closing the gap requires decisive action, robust disruption of criminal networks, and greater sharing of cross-sector data and intelligence to stop fraud at the source."
AI has created a growing "innovation asymmetry", says Shanker Ramamurthy, global managing partner for banking and financial markets at IBM, who points out that banks must operate within frameworks of regulation and ethics while criminals leverage AI unencumbered. "[This is] accelerating the threat landscape at an exponential rate," he says. "The challenge is not just that criminals are using new tools. It is that they are using AI to exploit existing vulnerabilities with extreme precision. We are seeing a shift towards a cognitive form of copying where automated attacks perfectly replicate legitimate customer behaviour, making them nearly invisible to legacy systems."

Michael Down, global head of financial services at technology firm Neo4j, agrees. He believes AI has fundamentally altered the landscape for criminals by making them appear far more credible, especially with generative AI, which now allows fraudsters to sound and talk exactly like a specific customer. "Conventional security systems already struggle to see the hidden patterns connecting different actors because they treat every interaction as a standalone event, but adding that level of sophistication makes criminal networks even harder to uncover," says Down.

He has frequently seen this play out in the creation of fake loan or mortgage services, in which fraudsters set up "digital fronts" to harvest sensitive data, collect upfront application fees and then use the stolen information to secure legitimate financing before vanishing with the funds. "On the surface, they're separate, unrelated requests from different people, but in reality, they're deliberate disguises created by criminals. These requests look and sound so convincing that you have to identify device patterns and account behaviours across an entire network to root out these actors," says Down.
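Down's point about tracing device patterns across a whole network can be illustrated with a toy sketch. The field names (`id`, `devices`) and the data are invented for illustration, and this is not Neo4j's product or any bank's actual system; it simply shows how applications that look unrelated one-by-one become a visible cluster once shared device fingerprints are linked:

```python
from collections import defaultdict

def link_applications(applications):
    """Group seemingly unrelated applications that share a device fingerprint.

    applications: list of dicts with an 'id' and a set of 'devices' (hashes).
    Returns clusters (sets of application ids) connected by any common device.
    """
    device_to_apps = defaultdict(set)
    for app in applications:
        for dev in app["devices"]:
            device_to_apps[dev].add(app["id"])

    # Union-find: merge applications connected through any common device.
    parent = {app["id"]: app["id"] for app in applications}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for apps in device_to_apps.values():
        apps = list(apps)
        for other in apps[1:]:
            parent[find(apps[0])] = find(other)

    clusters = defaultdict(set)
    for app_id in parent:
        clusters[find(app_id)].add(app_id)
    # Only multi-application clusters are interesting as potential rings.
    return [c for c in clusters.values() if len(c) > 1]

# Two "unrelated" applicants sharing one device surface as a single ring.
apps = [
    {"id": "A1", "devices": {"d1"}},
    {"id": "A2", "devices": {"d1", "d2"}},
    {"id": "A3", "devices": {"d3"}},
]
print(link_applications(apps))  # one cluster: {'A1', 'A2'}
```

Real deployments correlate far more signals (IP ranges, beneficiary accounts, behavioural biometrics), but the graph-linking principle is the same.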
The feeling is that banks are caught in a never-ending game of catch-up: growing visibility into people's lives, partly as a result of social media, gives criminals access to countless data points they can use to spoof people's voices and images, imitate behavioural patterns and bypass security checks. Robert Gerstmann, co-founder of communications platform Sinch, says: "AI is making fraud both more sophisticated and more scalable, which makes it harder for banks to keep pace. Criminals can now generate convincing messages, voices and identities at speed, lowering the barrier to entry and increasing the volume of attacks. This creates a dual challenge where speed and adaptability are critical, resulting in an ongoing arms race."

So what can lenders do about it? Experts suggest that, to stay one step ahead of criminals, banks must embed real-time risk controls into the technology they use, with the aim of turning AI from a complex risk consideration into an advantage for both protection and competition. "To close the gap, security leaders at banks must pivot from reactive defence to predictive intelligence," says IBM's Ramamurthy. "The goal is not just to catch fraud faster, it is to build a resilient ecosystem that anticipates the threat and disrupts attack paths before malicious transactions can be initiated."
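In their simplest form, the real-time risk controls the experts describe amount to scoring each authorised payment against the customer's own behavioural baseline before releasing it. The sketch below is purely illustrative: the features, weights and thresholds are assumptions for demonstration, not any bank's actual model, and production systems would use learned models rather than hand-set rules:

```python
def assess_payment(payment, profile):
    """Score an authorised payment against a customer's behavioural profile.

    Returns 'allow', 'step_up' (extra verification) or 'hold' (analyst review).
    All features and thresholds are illustrative assumptions.
    """
    score = 0.0
    if payment["payee"] not in profile["known_payees"]:
        score += 0.4  # first-ever payment to this payee
    if payment["amount"] > 3 * profile["avg_amount"]:
        score += 0.3  # unusually large amount for this customer
    if payment["channel"] != profile["usual_channel"]:
        score += 0.2  # unusual channel, e.g. phone-initiated instead of app
    if payment.get("session_coached"):
        score += 0.5  # signs the customer is on a call being coached live
    if score >= 0.7:
        return "hold"
    if score >= 0.4:
        return "step_up"
    return "allow"

profile = {"known_payees": {"P1"}, "avg_amount": 100, "usual_channel": "app"}
assess_payment({"payee": "P1", "amount": 50, "channel": "app"}, profile)   # allow
assess_payment({"payee": "P9", "amount": 500, "channel": "app"}, profile)  # hold
```

The design point is that the decision happens at the moment of the transaction, before funds move, which is exactly where authorised-push-payment fraud must be caught.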
[3]
The Mythos meeting focused on the wrong AI risk to banks. Here's the one nobody is talking about | Fortune
When Treasury Secretary Scott Bessent and Federal Reserve Chair Jay Powell convened the chief executives of leading U.S. banks earlier this month to discuss Anthropic's latest model, Mythos, they signaled a shift in how artificial intelligence is being understood in finance. This was not a meeting about innovation but a warning: that models capable of identifying and exploiting vulnerabilities could pose a material risk to core financial infrastructure.

That concern is justified. But the focus remains too narrow. In recent years, in discussions with leading financial institutions, I have seen how quickly concern rises once the adversarial uses of AI are understood. Yet the translation into action remains slow and uneven.

Much of the current attention is focused on cyber risk. This is a serious threat. But it is not the only one, and not the most immediate. Alongside the risks highlighted by Mythos, a parallel threat is already unfolding at scale. It does not depend on new frontier models, but on AI capabilities that are already widely available. And unlike cyber attacks, which require access to systems, this threat operates by targeting people.

Artificial intelligence has made fraud dramatically cheaper, easier to execute and far more scalable. What once required time and coordination can now be automated and deployed at industrial scale. AI systems can generate thousands of convincing messages, voices and videos in seconds, each tailored to a specific individual. This is not incremental. It is structural. Fraud has shifted from a manual activity to a machine-driven one.

Hyper-personalised social engineering campaigns, often powered by AI agents, now operate across multiple channels, jurisdictions and identities. They impersonate executives, advisers or family members with increasing credibility, creating urgency and inducing authorised transfers. In these scenarios, the system is not breached. It is bypassed. Customers are not necessarily hacked.
They are convinced. And because transactions are authorised, existing safeguards are often ineffective. Biometric checks can be defeated by deepfakes. Rule-based monitoring is calibrated to detect human fraudsters, not coordinated networks of AI agents operating at machine speed.

This creates a fundamentally different type of risk. Unlike cyber attacks, which tend to be episodic and visible, AI-enabled fraud operates as a continuous and distributed leakage of funds across millions of transactions. It is a creeping threat: easier to execute, faster to scale, and often invisible until losses become material. The trajectory points toward trillions of dollars in losses in the coming years.

If the public comes to believe that financial institutions cannot protect customers from manipulation and fraud, trust in the system will erode. The consequences will extend beyond losses. Friction will rise, customers will hesitate, and confidence in banks' ability to safeguard money may weaken in ways no less damaging than cyber threats. This is not a greater threat than cyber risk. It is a parallel one. And it deserves similar attention.

Most institutions still rely on fragmented data, legacy monitoring and human-led analysis that cannot keep pace with adaptive, AI-driven threats. A meaningful response requires architectural redesign: real-time, AI-native detection; integration of fraud, AML and behavioural signals; and the ability to intervene at the point of transaction, including in authorised payments.

It also requires moving from isolated to coordinated defence. Fraud campaigns target customers across institutions simultaneously, while controls remain siloed. Effective response depends on identifying patterns and campaigns in real time. Privacy and competition considerations remain important, but they can no longer justify structural blind spots. Privacy-preserving technologies offer a path forward, enabling institutions to share signals without exposing sensitive data.
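One simple flavour of such privacy-preserving signal sharing is exchanging keyed hashes of fraud indicators rather than the indicators themselves. The sketch below assumes a consortium-shared HMAC key, which is a deliberate simplification: real deployments would more likely use private set intersection protocols that avoid any shared secret. The account number shown is a fabricated example:

```python
import hmac
import hashlib

def blind_indicator(identifier: str, consortium_key: bytes) -> str:
    """Produce a keyed digest of a fraud indicator (e.g. a mule-account number).

    Institutions exchange only these digests: a match tells two banks they have
    seen the same indicator, while the raw value stays unexposed to anyone
    without the key.
    """
    return hmac.new(consortium_key, identifier.encode(), hashlib.sha256).hexdigest()

def match_indicators(ours: set, theirs: set) -> set:
    """Intersect two banks' blinded indicator sets to find shared threats."""
    return ours & theirs

# Two banks independently blind the same suspected mule account (fabricated):
key = b"consortium-demo-key"
bank_a = {blind_indicator("GB00DEMO00000001", key)}
bank_b = {blind_indicator("GB00DEMO00000001", key),
          blind_indicator("GB00DEMO00000002", key)}
shared = match_indicators(bank_a, bank_b)  # exactly one common indicator
```

The limitation to note is that anyone holding the key can test guesses against the digests, which is precisely why production schemes move to cryptographic PSI; the sketch only conveys the shape of the idea.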
In parallel, institutions need to adopt a "Defence AI" approach: using AI to defend against AI-driven threats. Human-only first lines of defence cannot scale. AI-native systems must support faster detection and response under human oversight.

The lesson from the Mythos moment is not only that AI can break systems. It is that the financial system is already being exploited in another way that is less visible, more scalable and potentially just as corrosive. If the financial system does not respond quickly, the consequences will be severe: rising losses, rising friction, and a significant erosion of public trust.

Regulators should be convening senior financial leaders on this issue, too, as a parallel AI risk, before a catastrophe that is already within reach of bad actors fully materialises. The financial system, the technology sector and policymakers must now recognise the scale of this vulnerability and act with far greater urgency.
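Machine-speed triage under human oversight can be illustrated with a toy alert correlator: isolated low-confidence signals are suppressed as probable noise, while an entity that accumulates several distinct corroborating signals inside a short window is escalated to an analyst. The entity names, signal labels and window length below are invented for illustration:

```python
from collections import defaultdict

WINDOW = 300  # seconds; correlation window (illustrative choice)

def correlate(alerts):
    """Group low-confidence alerts by entity; escalate only correlated ones.

    alerts: list of (timestamp_seconds, entity, signal) tuples.
    Returns (escalations, suppressed): escalations maps each entity to the
    distinct signals seen for it within WINDOW of its first alert; suppressed
    lists entities with only a single signal, treated as probable noise.
    """
    by_entity = defaultdict(list)
    for ts, entity, signal in sorted(alerts):
        by_entity[entity].append((ts, signal))

    escalations, suppressed = {}, []
    for entity, events in by_entity.items():
        first_ts = events[0][0]
        signals = {s for ts, s in events if ts - first_ts <= WINDOW}
        if len(signals) >= 2:           # independent corroborating signals
            escalations[entity] = signals
        else:
            suppressed.append(entity)   # single weak signal: likely noise
    return escalations, suppressed

# A new device alone is weak; a new device plus impossible travel is not.
alerts = [
    (0, "acct-1", "new_device"),
    (120, "acct-1", "impossible_travel"),
    (10, "acct-2", "new_device"),
]
escalations, suppressed = correlate(alerts)  # acct-1 escalated, acct-2 suppressed
```

The human-oversight part is the hand-off: only the escalated entities reach an analyst queue, keeping the volume a human team can actually review.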
[4]
Banks Up Defenses as AI Drives 76% of Cyberattacks | PYMNTS.com
"More is changing now and faster than we have seen in a long time, the time to find and exploit vulnerabilities is drastically decreasing," Patrick Opet, chief information security officer at JPMorgan, told the FT.

The report cites the IBM X-Force 2025 Threat Intelligence Index, which showed that the finance and insurance industries made up 27% of cyberattacks last year, the second-largest share among all sectors. As the FT notes, the role of financial institutions at the core of modern economies has made them an obvious target for cybercriminals hoping to take advantage of both their financial reserves and their large stores of customer data.

"It is the ability to get paid in a ransomware scenario [which motivates hackers]," said Katherine Kearns, head of proactive cyber services at cyber security consultancy S-RM, adding that this has turned financial services firms into attractive targets. It's why banks are often early adopters of new cybersecurity technology such as multi-factor authentication and increased supply chain safeguards, the FT added.

Still, experts have begun to caution that rapid AI adoption has provided cybercriminals with new methods of attack. The FT cites research commissioned by financial and risk advisory company Kroll showing that 76% of companies have experienced a security incident involving AI applications or models in the last two years.

And as covered here last week, the newest models from artificial intelligence giants like OpenAI and Anthropic could mark a critical inflection point in the cybersecurity space. "AI is no longer just a tool in the hands of an attacker; it is beginning to replicate aspects of the attacker itself," that report said. For both finance chiefs and information security executives, the implication is increasingly stark, the report continued, with cyber risk shifting from a targeted phenomenon to something more akin to ambient exposure.
"Organizations are not just selected; they are continuously scanned, probed and tested by systems operating at scale," PYMNTS added. "The median enterprise, the one with uneven patching, over-permissioned accounts, and inconsistent configuration management, is now more accessible to multistep intrusion attempts that can be executed, or at least orchestrated, by AI systems."
Financial institutions are ramping up cybersecurity defenses as AI accelerates cyber attacks, with 76% of organizations reporting AI-related security incidents in the past two years. The finance sector now accounts for 27% of all cyberattacks, making it the second-most targeted industry as threat actors exploit AI to execute faster, more sophisticated fraud campaigns.
Financial institutions are confronting a rapidly evolving threat landscape as AI transforms the speed and sophistication of cyber attacks. According to research commissioned by Kroll, 76% of organizations have experienced a security incident involving AI applications or models in the past two years [4]. The finance and insurance sectors accounted for 27% of all incidents in 2025, making them the second-highest targeted industry, according to IBM's X-Force 2025 Threat Intelligence Index [1].
Source: PYMNTS
Banks including JPMorgan Chase, Lloyds Banking Group, and Santander are taking urgent measures to strengthen their cybersecurity defenses against this increase in cyber attacks. "More is changing now and faster than we have seen in a long time, the time to find and exploit vulnerabilities is drastically decreasing," says Patrick Opet, chief information security officer at JPMorgan Chase [1].

The AI risk to banks extends beyond traditional cyber threats, creating what experts call an "innovation asymmetry", in which threat actors leverage AI unencumbered by regulatory constraints while financial institutions must operate within strict compliance frameworks [2].

While recent discussions between Treasury Secretary Scott Bessent, Federal Reserve Chair Jay Powell, and bank CEOs focused on Anthropic's Mythos model and its ability to identify system vulnerabilities, experts warn of a more immediate threat already unfolding at scale [3]. AI-powered fraud operates by targeting people rather than systems, using generative AI to create convincing deepfakes and social engineering campaigns that bypass traditional security measures.
Source: FT
Data from UK industry body Cifas reveals that AI scams pushed fraud reports to a record 444,000 last year, with identity fraud in banking rising 10% year-on-year to 63,678 cases as AI-powered impersonation and synthetic media make detection increasingly difficult [2]. "AI is making fraud both more sophisticated and more scalable, which makes it harder for banks to keep pace," says Robert Gerstmann, co-founder of communications platform Sinch [2]. Criminals can now generate thousands of convincing messages, voices, and videos in seconds, each tailored to specific individuals, transforming fraud from a manual activity into a machine-driven operation.

To combat the evolving threat landscape, banks are implementing multi-layered approaches that address both cyber resilience and AI-native detection systems. Lloyds has developed its Global Correlation Engine, an AI tool that helps identify threats and reduce false positives -- activity misidentified as malicious [1]. This "Defence AI" approach recognizes that human-only defenses cannot scale against automated attacks operating at machine speed.

Supply chain security has become a critical focus as cyber criminals increasingly view partner networks as the "soft underbelly" of their real targets. Thomas Harvey, chief information security officer at Santander UK, explained that the company regularly audits partners with extensive cyber security clauses and monitoring tools tracking external posture changes [1]. JPMorgan Chase has taken an even more direct stance, with Opet noting they sent a 2025 letter to suppliers stating: "We said to suppliers that if you don't change your approach, we will stop buying" [1].

The status of financial institutions as the backbone of modern economies, combined with their large financial reserves and wealth of customer data, makes them attractive targets for ransomware attacks. "It is the ability to get paid in a ransomware scenario [which motivates hackers]," says Katherine Kearns, head of proactive cyber services at S-RM [1]. The threat extends beyond immediate financial losses to a potential erosion of public trust, as customers increasingly question whether institutions can protect them from sophisticated manipulation and fraud.

Nick Calver, vice-president for the financial services industry at Palo Alto Networks and a former cyber executive at Lloyds Bank and HSBC, describes receiving a phishing email that "looked absolutely perfect, it was well written, it had a LinkedIn profile and the only thing that got me suspicious was the email address" [1]. Such incidents highlight how AI enables threat actors to execute hyper-personalized attacks that traditional rule-based monitoring systems struggle to detect.

The financial sector faces what experts describe as an arms race, requiring continuous adaptation as AI capabilities advance. HSBC's appointment of its first head of AI, David Rice, signals how seriously global banks are treating the technology as both an opportunity and a threat [2]. "Our customers increasingly expect their bank to deliver services uniquely aligned to their specific needs, and fast," said Georges Elhedery, HSBC's chief executive, emphasizing AI's role in future banking operations.

Meanwhile, regulators including the Bank of England and the Financial Conduct Authority are advising sector participants to focus on resilience, ensuring systems can be restored quickly following attacks [1]. Brent Tomlinson, president of risk advisory at Kroll, notes that "most financial institutions are re-evaluating what 'good' looks like," implementing more robust compliance training programs and internal guardrails to address predominantly identity-based issues like social engineering and phishing [1]. As cyber risk shifts from targeted attacks to ambient exposure, organizations face continuous scanning and probing by AI systems operating at unprecedented scale.