2 Sources
[1]
AI is supercharging a global cyber fraud crisis--but it could also solve it
AI's growing use in cyberattacks poses a significant threat to online security, with sophisticated scams impacting businesses and individuals globally. While AI lowers the cost of deception, advanced AI-enabled detection and coordinated international action are crucial to combat rising cyber-enabled fraud.

Ask most people to name AI's defining moment and they will likely point to ChatGPT's public launch in November 2022. Yet a different development involving the technology could have a bigger impact on people's everyday lives: the use of AI to carry out cyberattacks. AI is now widely seen as the biggest threat to online security in the year ahead. Reports of hackers bypassing guardrails to launch cyberattacks on major companies cannot be ignored, prompting governments to brace for a surge in the scale and severity of AI-enabled cyberattacks.

In a country like India--which in 2023 accounted for around 49% of global real-time payment transactions, and in 2024 processed around 181 bn digital transactions with a total value exceeding ₹233 tn--the stakes could hardly be higher. Cyber-enabled fraud is already widespread: 73% of respondents to WEF's latest Global Cybersecurity Outlook say they, or someone in their network, were personally affected in 2025. Many CEOs now rank it as the top cybersecurity threat--overtaking ransomware--with 77% of respondents reporting an increase in incidents over the past year. This trend is particularly pronounced in South Asia, where 85% of organisations believe the risks of cyber-enabled fraud and phishing attacks have increased. In India alone, more than ₹52,976 cr has been lost to cyber fraud and cheating cases over the past six years, including almost ₹19,813 cr in 2025.

AI models can create synthetic voices that are being used in phishing attacks over the phone. One British engineering company was scammed out of $25 mn when this tactic was deployed on a video call. If bosses of some of the world's most sophisticated and resilient enterprises are falling victim to these crimes, how worried should smaller businesses be about issues such as email fraud, fake invoices and identity theft? What about the risks for everyday citizens, particularly vulnerable groups such as the elderly?

Fraud has become the connective tissue of cyber risk, affecting households, corporations and national economies simultaneously. One scam email can lead to data breaches that halt a company's operations, setting off a chain reaction that ripples through supply chains and across borders, denting not just bottom lines but trust in digital and international systems.

AI's potential to automate cybercrime may only be matched by its capacity to prevent it. ML algorithms can detect fraud in banking, for example, but they address the act, not the intent--especially as AI has lowered the cost of deception while increasing its credibility. Recent headline-grabbing cases illustrate the stakes. A deepfake video showed an Irish presidential candidate falsely announcing her withdrawal from the election campaign, while Indonesian citizens were scammed by an Instagram video appearing to show the country's president directing people to a WhatsApp number to receive aid. Yet fraud doesn't only strike through high-profile incidents. Smaller-scale scams occur constantly, and their prevalence is rising across all economies. WEF's research shows that 79% of people in North America have been impacted, or know someone who has.
AI is enabling rapid, tailored content creation, allowing criminals to scale and personalise scams with unprecedented efficiency. This should serve as both a warning and a wake-up call, highlighting the need for equally advanced AI-enabled detection, authentication and monitoring tools. Today's cyber defences are not keeping pace with the accelerating speed and sophistication of cyberattacks. But that does not have to remain the case.

The Global Cybersecurity Outlook identifies three main obstacles to better cyber defences: fragmented regulation across borders, insufficient intelligence sharing, and a lack of cybersecurity capacity among small and medium-sized enterprises, with 46% reporting critical skills shortages. Initiatives such as the upcoming UN and Interpol Global Fraud Summit in Vienna in March signal a shift towards more coordinated international action to prevent cybercrime. Similarly, the AI Impact Summit in New Delhi is being held to envision a future in which AI advances humanity, fosters inclusive growth, and safeguards our shared planet.

Protecting individuals also requires action at the human, infrastructure and technology levels--from digital safety education and stronger identity verification and domain oversight to AI-enabled screening that flags fraud before harm occurs. Isolated action will not be enough. As fraud becomes systemic, the response must be systemic too. This will require coordinated collaboration on a global scale, bringing governments, industry leaders and civil society together to act across borders rather than just within them. Only in this way can they strengthen their collective capacity to prevent, protect against, and mitigate cyber-enabled fraud across the digital ecosystem.

The author is Managing Director, Centre for Frontier Technologies and Innovation and Centre for Cybersecurity, World Economic Forum
[2]
AI must be an infrastructure to tackle real-time financial fraud
"Fraud is no longer episodic or manual. It is real-time, adaptive, networked, and increasingly cross-border in nature", Suresh Sethi, Managing Director and CEO at Protean eGov Technologies, said at a session on "AI for Fraud Prevention and Financial Inclusion in BFSI" at the AI Impact Summit on February 16, 2026. He added that "traditional rule-based controls designed for slower and smaller systems are being stretched to their limits" as India's digital financial ecosystem scales rapidly. Furthermore, Sethi emphasised that the scale of India's digital rails has fundamentally altered risk dynamics. "Onboarding happens in seconds, not days. While this scale is a source of enormous strength, it also fundamentally changes the nature of the risk," he said, noting that transaction volumes now run into billions monthly. Moreover, he argued that the industry must rethink how fraud prevention is built into financial systems. "AI must be reframed, not as a tool, but as an infrastructure," he said, explaining that the real shift is from after-the-fact detection to what he described as "in-flight intelligence" across onboarding, transactions, and networks. Meanwhile, Neeraj Aggarwal, Managing Director and Senior Partner at Boston Consulting Group (BCG), said the conversation reflects a broader transition in digital finance. He noted that while digital onboarding and payments have matured, "trust is a place where AI can play a role in establishing the infrastructure by giving real-time risk intelligence and fraud prevention," particularly as fraud becomes more sophisticated alongside rising transaction velocity. Notably, Sethi repeatedly stressed that static rules are increasingly inadequate in high-velocity digital ecosystems. "Rule-based mechanisms, while they are important to set a threshold, are just not good enough to work at scale in the way the transaction velocity is going," he said. In other words, binary signals such as transaction limits or value thresholds cannot sufficiently capture evolving behavioural fraud patterns in real time. Furthermore, highlighting the speed of financial fraud, Sethi argues, "Fraud happens in milliseconds, and you would always be caught post-facto in trying to prevent it," thereby underscoring the need for systems that are both reactive and adaptive. He consequently outlined three key capabilities AI brings to fraud prevention: speed and reactiveness, networked intelligence, and reduction in false positives. In addition, on networked intelligence between financial bodies, he pointed to mule account chains where funds are quickly dispersed. "Money is moved from the victim's account into one or more accounts in milliseconds and then branches out into multiple accounts, including cross-border accounts," he said, noting that a single institution may only see a seemingly legitimate transaction without broader behavioural context. Therefore, he added that AI enables institutions to analyse behavioural signals across multiple accounts and institutions, helping them identify "anomalies in those behaviours and catch it at source", while also reducing false positives that can otherwise disrupt legitimate transactions and livelihoods. At the same time, Manish Agarwal, Business Head at Kotak811, Kotak Mahindra Bank, said the use of AI in finance must remain tightly governed due to regulatory and customer accountability requirements. 
"If AI has to be used in financial services, which is highly regulated, it has to be explainable, and the governance mindset has to be very much into the people who are building it," he said. To that end, he introduced what he described as a "JCT" framework: justifiability, contestability, and traceability. "Any decision which has been taken, whether the credit has been denied or a transaction has been stopped, how do we justify that?" he said, adding that affected users should be able to contest automated decisions and institutions must be able to trace model reasoning and potential biases. Moreover, Agarwal also emphasised the continued importance of human oversight. "AI can continue to make decisions, but is there a human who's looking at the decision being done?" he said, citing video KYC as an example where automated checks operate alongside human verification under regulatory expectations. Similarly, Saurabh Mittal, Country Head - Strategy, Transformation, Analytics and Research at DBS Bank India, said responsible data usage and governance structures are central to scaling AI safely. "Every fast car needs a brake for us to have a sense of safety. Similarly, any AI development will require guardrails to be in place," he said, while noting that historical data biases can unintentionally exclude sections of society if not carefully managed. Beyond institutional deployment, the panel also discussed AI as a shared infrastructure for fraud prevention and financial inclusion. In this regard, Sethi said current defences remain fragmented while fraud networks are increasingly interconnected. "When fraud is happening, our defences are fragmented," he said, thereby arguing for cross-border intelligence embedded into AI infrastructure. Accordingly, he suggested that shared, anonymised data signals could improve predictability in risk analytics while protecting user privacy. "The data which is being shared has to be anonymised. These are not customer data we are sharing, but anonymised signals and behavioural patterns," he said, adding that such approaches could help break supply chains of fraud across institutions and geographies. From an investment perspective, Srijay Ghosh, Managing Director at Temasek, linked trusted AI infrastructure directly to capital allocation and inclusion outcomes. "If you have trusted data, you have trusted models, and if you have trusted models, you have trusted capital allocation. If you have trusted capital allocation, then you have trusted inclusion," he said. However, Sethi noted that institutions must ensure that enhanced controls do not come "at the cost of exclusion or friction, especially for first-time users and MSMEs", while also maintaining assisted digital models for users with lower digital literacy.
AI has supercharged cyber fraud globally, with India losing over ₹52,976 crore to cybercrime in six years. But experts at the AI Impact Summit argue that the same technology, when deployed as infrastructure rather than a tool, could provide real-time fraud prevention. The challenge lies in building AI-enabled detection systems that match the speed and sophistication of attacks.
AI has emerged as both the catalyst and potential cure for a mounting global cyber fraud crisis that threatens businesses, governments, and individuals alike. According to the World Economic Forum's latest Global Cybersecurity Outlook, 73% of respondents report that they or someone in their network were personally affected by cyber-enabled fraud in 2025 [1]. The scale of the problem is particularly acute in India, where more than ₹52,976 crore has been lost to cyber fraud and cheating cases over the past six years, including almost ₹19,813 crore in 2025 alone [1].
The threat landscape has fundamentally shifted as AI enables criminals to scale and personalize cyberattacks with unprecedented efficiency. Sophisticated phishing attacks now deploy synthetic voices and deepfakes to deceive even seasoned executives. A British engineering company was scammed out of $25 million when fraudsters used AI-generated voices on a video call [1]. Deepfake videos have shown political candidates falsely withdrawing from elections and government officials directing citizens to fraudulent aid schemes. In South Asia, 85% of organizations believe risks of phishing attacks have increased [1].

Suresh Sethi, Managing Director and CEO at Protean eGov Technologies, emphasized at the AI Impact Summit that "fraud is no longer episodic or manual. It is real-time, adaptive, networked, and increasingly cross-border in nature" [2]. India's digital financial ecosystem now processes billions of transactions monthly, with the country accounting for around 49% of global real-time payment transactions and processing around 181 billion digital transactions with a total value exceeding ₹233 trillion in 2024 [1].
Sethi warned that "fraud happens in milliseconds, and you would always be caught post-facto in trying to prevent it" [2]. He described how mule accounts enable rapid fund dispersal: "Money is moved from the victim's account into one or more accounts in milliseconds and then branches out into multiple accounts, including cross-border accounts" [2]. Traditional rule-based controls designed for slower systems cannot keep pace with this velocity, creating a dangerous gap in defenses against identity theft, data breaches, and ransomware attacks.

Experts argue that the solution lies in fundamentally reframing how AI is deployed in fraud prevention. "AI must be reframed, not as a tool, but as an infrastructure," Sethi stated, explaining that the shift must move from after-the-fact detection to "in-flight intelligence" across onboarding, transactions, and networks [2]. He outlined three key capabilities AI brings: speed and reactiveness, networked intelligence, and reduction in false positives [2].
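The networked-intelligence capability can be illustrated with a toy transfer graph. The Python sketch below is an assumption-laden illustration: the transfers, account labels and branch threshold are invented, but it shows how a fan-out pattern invisible at any single hop becomes obvious once transfers are viewed as a network.

    # Illustrative sketch of "networked intelligence": spotting mule-style
    # fan-out by walking a transfer graph. The graph, threshold and account
    # labels are invented for this example.
    from collections import defaultdict

    transfers = [                     # (source, destination) in a short window
        ("victim", "mule0"),
        ("mule0", "mule1"), ("mule0", "mule2"), ("mule0", "mule3"),
        ("mule1", "offshore1"), ("mule2", "offshore2"),
    ]

    graph = defaultdict(list)
    for src, dst in transfers:
        graph[src].append(dst)

    def fan_out_accounts(graph, min_branches: int = 3):
        # An account that rapidly splits inbound funds across many
        # beneficiaries is a classic dispersal pattern; an institution
        # seeing only one hop would miss it.
        return [acct for acct, dests in graph.items() if len(dests) >= min_branches]

    print(fan_out_accounts(graph))    # ['mule0']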
Neeraj Aggarwal, Managing Director and Senior Partner at Boston Consulting Group, noted that while digital onboarding and payments have matured, "trust is a place where AI can play a role in establishing the infrastructure by giving real-time risk intelligence and fraud prevention" [2]. AI-enabled detection systems can analyze behavioral signals across multiple accounts and institutions, identifying anomalies that single institutions might miss while reducing disruptions to legitimate transactions.
As AI becomes central to fraud prevention, experts stress that deployment must include robust AI governance frameworks. Manish Agarwal, Business Head at Kotak811, introduced a "JCT" framework encompassing justifiability, contestability, and traceability. "If AI has to be used in financial services, which is highly regulated, it has to be explainable, and the governance mindset has to be very much into the people who are building it," he said [2].

Human oversight remains essential even as AI systems become more sophisticated. Agarwal emphasized: "AI can continue to make decisions, but is there a human who's looking at the decision being done?" [2]. Saurabh Mittal, Country Head at DBS Bank India, added that "every fast car needs a brake for us to have a sense of safety. Similarly, any AI development will require guardrails to be in place" [2].
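One way to picture a JCT-style decision trail is a minimal record attached to every automated outcome. This is a hypothetical sketch only: the field names, reason codes and appeal channel are assumptions, not a structure any panellist specified.

    # Hypothetical decision record in the spirit of the "JCT" framework
    # (justifiability, contestability, traceability); all fields are
    # assumptions for illustration.
    from dataclasses import dataclass

    @dataclass
    class DecisionRecord:
        decision: str                  # e.g. "transaction_blocked"
        reasons: list[str]             # justifiability: readable reason codes
        model_version: str             # traceability: which model/ruleset decided
        input_digest: str              # traceability: hash of the features used
        appeal_channel: str = "ombudsman"   # contestability: where to challenge
        reviewed_by_human: bool = False     # oversight flag, per Agarwal's point

    record = DecisionRecord(
        decision="transaction_blocked",
        reasons=["amount far above account baseline", "new-beneficiary fan-out"],
        model_version="fraud-score-2026.02",
        input_digest="sha256:ab12...",   # placeholder digest
    )
    print(record.decision, record.reasons[0], record.appeal_channel)

Persisting such records would let an institution justify a block, let the affected customer contest it, and let an auditor trace which model and inputs produced it.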
The World Economic Forum's Global Cybersecurity Outlook identifies three main obstacles to better defenses: fragmented regulation across borders, insufficient intelligence sharing, and a lack of cybersecurity capacity among small and medium-sized enterprises, with 46% reporting critical skills shortages [1]. Initiatives such as the upcoming UN and Interpol Global Fraud Summit in Vienna signal a shift towards more coordinated international action. Protecting individuals requires action at multiple levels, from digital safety education and stronger authentication to AI-enabled screening that flags fraud before harm occurs. As fraud becomes systemic across supply chains and borders, the response must be equally systemic, bringing together governments and industry in coordinated collaboration.

Summarized by Navi