3 Sources
[1]
AI-driven fraud far more profitable, Interpol warns
Interpol says fraud schemes using the tech are 4.5x more profitable

AI is apparently good for the bottom line if your business is crime. Financial fraud schemes carried out with the help of artificial intelligence are 4.5 times more profitable than those that aren't enhanced, according to Interpol's latest estimates.

The crime agency said that AI "greatly boosts both efficiency and effectiveness," making each interaction with a fraudster more convincing, and the technology is all the more likely to continue growing in popularity as a result.

Cybercriminals most commonly use generative AI tools to eliminate the small details that may have otherwise given them away. Using AI to rephrase text messages or emails to victims can help iron out the quirks that may betray a non-native speaker. It could mean the difference between success and failure when impersonating major brands, for example.

On the more advanced end of the scale, deepfake technology is far more sophisticated now than it was even two years ago. Interpol said that criminals can create convincing voice clones with just ten seconds of reference material, such as audio ripped from a social media post.

Dark web marketplaces offer full-service synthetic identity kits, commonly referred to as deepfake-as-a-service products, which make it even easier for criminals to trick victims into thinking they're speaking to a known individual. These kits come at affordable prices, Interpol said in its annual financial fraud report, and have accelerated the industrialization of this type of cybercrime.

"Over the past two years, technology has continued to enable and enhance financial fraud, empowering criminal networks to scale operations exponentially with minimal investment," said Interpol. "Digital technology and AI, in particular, have dramatically transformed social engineering techniques and victim profiling, enabling fraudsters to construct highly persuasive fraud environments.
"The proliferation of AI-driven tools, large language models, cryptocurrencies, and the rapid expansion of fraud-as-a-service platforms have collectively lowered barriers to entry, enabling widespread access to sophisticated fraud capabilities, elevating the generation of financial gain through fraud schemes to an efficient, global industry."

Agentic AI is the trendy autonomous new kid on the AI block, and while the benefits it offers to both sides of the cybercrime divide are well documented, it isn't yet being used at scale. However, when that day comes, Interpol is concerned about the capabilities it will hand attackers.

Deploying agents would take much of the legwork away from a fraudster, who could simply prompt a bot to return all the pertinent information about an individual, including their credentials, or a business's system vulnerabilities that could be exploited for ransomware attacks. In the latter case, the agent could also scour stolen data and advise the crook on how to price their ransom demands, based on the value of the data the bot stole and the victim's financial position.

Whether the agentic AI security threat reaches that reality remains to be seen. Some cyber experts, like Kevin Mandia, have staked millions on the notion that the tech will usher in the next big trend in cybercrime, while others are exercising a little more restraint.

Member countries have also informed Interpol of a rise in sextortion schemes that rely on AI-generated imagery to blackmail victims into paying the criminals. In some cases, targets rejected the scammers' initial attempts at traditional financial fraud schemes, such as crypto, forex, and romance scams, only to be subjected to AI-assisted sextortion campaigns. This development is closely linked to the rapid expansion of scam centers across the world.
Originating in Southeast Asia (SEA) around four years ago, these facilities often see people trafficked into online scam work from other countries. Crime-fighting organizations have worked to shut these down, but in recent years the number of centers has expanded, as has their geographic footprint.

Scam centers of this kind are being seen increasingly in regions beyond SEA, including Central and South America, North Africa, and some parts of Europe. Police reports suggest that people are being trafficked to these compounds under false pretenses, regardless of the region. Interpol believes that the scamming phenomenon now involves hundreds of thousands of individuals globally, many of whom are thought to be victims of human trafficking.

The growth of scam centers appears to be outpacing the ability of international police to shut them down. Interpol routinely publicizes the successful operations it coordinates with regional police forces, often yielding a large number of arrests each time. One eight-week operation announced in February resulted in 651 arrests following probes across 16 countries in Africa, with over 1,200 victims identified. A further 574 arrests were announced the previous December, and 260 the September before that, all in Africa, illustrating the spread of the crime beyond SEA.

In 2025 alone, global losses associated with financial fraud were around $442 billion, Interpol reckons, and this figure is only expected to rise over the next three to five years, largely because of AI.

Valdecy Urquiza, secretary general at Interpol, said: "Enabled by artificial intelligence, low-cost digital tools and increased global criminal collaboration, we are witnessing the industrialization of fraud.

"It is vital to remember that the cost of financial crime is not just money - it is people's life savings, their dignity, and in the worst case, their life.
"Strengthening cooperation between law enforcement, the private sector, and raising public awareness is key in tackling this global security threat." ®
[2]
Cybercriminals using AI for fraud are making far more profit, Interpol claims
Polishing phishing emails and cloning voices are just some of the ways crooks use AI

* Interpol says GenAI-powered fraud is 4.5x more profitable
* AI boosts phishing, deepfakes, and social engineering campaigns
* Agentic AI could enable autonomous end-to-end fraud in future

Cybercriminals and fraudsters using Generative Artificial Intelligence (GenAI) are 4.5 times more profitable than those not using it, Interpol says. In a new research paper, titled "Global financial fraud threat assessment", the international law enforcement agency said AI "greatly boosts both efficiency and effectiveness" of scam campaigns, suggesting that its popularity in the criminal underbelly is only going to grow.

There are numerous ways in which crooks can use GenAI, but the most obvious one seems to be polishing phishing content. Before the emergence of AI, the best way to spot a phishing email was simply to proofread it, since the fraudsters were usually non-native speakers and the messages were riddled with mistakes that made it obvious they didn't come from legitimate brands.

With AI to polish and rephrase the content, proofreading is no longer a viable option, and phishing emails have become more successful and impactful. But that's just the "gateway drug" to AI-powered fraud. High-level crooks are using AI for deepfakes, creating hyper-convincing voice clones from almost no source material. To make matters even worse, the dark web is full of widely available kits (deepfake-as-a-service) that further lower the barrier to entry and make kicking off an impersonation campaign just a matter of dollars.

"Over the past two years, technology has continued to enable and enhance financial fraud, empowering criminal networks to scale operations exponentially with minimal investment," said Interpol.
"Digital technology and AI, in particular, have dramatically transformed social engineering techniques and victim profiling, enabling fraudsters to construct highly persuasive fraud environments."

Interpol also discussed agentic AI - systems that "can autonomously plan and execute complete fraud campaigns, from reconnaissance, to ransom demands." For crooks it sounds promising, but it has not yet reached the level of mass use that GenAI has. Whether or not that happens remains to be seen. After all, the promise of agentic AI has yet to fully materialize in the legal world, too.

Via The Register
[3]
Global financial fraud hits USD 442 billion in 2025, AI scams surge: Interpol
In 2025, global financial fraud skyrocketed to a staggering USD 442 billion, with Interpol's latest findings highlighting a disturbing trend: cybercriminals are leveraging sophisticated AI technologies. With fraud now yielding 4.5 times greater returns, deepfake audio has emerged as a common tool for greenlighting fraudulent wire transfers.

New Delhi: Over USD 442 billion has been siphoned off from the global economy in 2025 through financial fraud, Interpol said in its global financial fraud threat assessment for 2026, released on Monday. The report kept the overall global risk related to financial fraud at "high". Scamsters are leveraging agentic artificial intelligence (AI), which can "autonomously plan and execute complete fraud campaigns -- from reconnaissance to ransom demands", the Lyon-based global police cooperation body said.

The second edition of the report warned that AI-enhanced fraud is 4.5 times more profitable than traditional methods, with agentic AI systems now capable of autonomously planning and executing complete fraud campaigns -- from reconnaissance to ransom demands -- making them a force multiplier. The technology is being put to uses ranging from harvesting victim credentials to generating psychologically tailored ransom notes; in the Asia-Pacific region, fraudsters have already used deepfake audio to mimic the voices of corporate executives during real-time phone calls to authorise fraudulent wire transfers, the report stated.

Titled 'INTERPOL Global Financial Fraud Threat Assessment', the report presents a grim anatomy of a "global fraud crisis", describing the frauds as a "polycriminal milieu" where traditional crimes like drug trafficking now cross paths with highly sophisticated, tech-enabled scams orchestrated through the Internet.
The report warned that "fraud-as-a-service" platforms and generative AI have demolished "barriers to entry", allowing even low-skill individuals to execute hyper-realistic campaigns.

The spread of cyber slavery and scam compounds, once prevalent mainly in Southeast Asian countries, is now being detected in the Middle East, Africa, and Latin America, aiding the expansion of global financial fraud syndicates, according to the assessment. The labyrinthine compounds house hundreds of thousands of people from nearly 80 nationalities, trafficked and forced to perpetrate online scams in the name of lucrative jobs, it added.

Global law enforcement agencies are also trying to collaborate more effectively, the report noted. "Since 2024, the number of fraud-related Interpol Notices and Diffusions has increased by 54 per cent. Over the same period, Interpol supported member countries in more than 1,500 transnational fraud cases involving lost assets valued at USD 1.1 billion," it said.

The report further said the scam centres are growing in number and scale, targeting ever more victims. "Although these operations are regularly shut down, the criminal leaders behind them remain hard to identify, using intermediaries and shell companies to hide their tracks and avoid detection," it said.

Interpol said it is closing this critical gap by launching 'Operation Shadow Storm', a new international task force funded by the United Kingdom's Home Office as part of a unified, data-driven response. "Using Interpol's network and tools such as I-GRIP, a stop-payment mechanism, the task force will target not only the financial frauds generated by scam centres, but also the links to cybercrime and human trafficking for forced criminality," it said.
Interpol's latest global financial fraud threat assessment reveals that AI has transformed cybercrime into a highly profitable industry. Financial fraud schemes using AI are 4.5 times more profitable than traditional methods, with global losses reaching $442 billion in 2025. Criminals deploy deepfake technology, voice cloning, and fraud-as-a-service platforms to execute sophisticated scams at unprecedented scale.
Financial fraud has escalated to an alarming $442 billion globally in 2025, according to Interpol's latest global financial fraud threat assessment released in March 2026 [3]. The international law enforcement agency revealed that AI-enhanced fraud schemes are 4.5 times more profitable than traditional methods, marking a significant shift in how cybercriminals operate [1][2]. The report maintained the overall global risk related to financial fraud at "high," signaling that this trend shows no signs of slowing.
Cybercriminals using AI have discovered that the technology "greatly boosts both efficiency and effectiveness," making each fraudulent interaction more convincing and dramatically increasing success rates [1]. This profitability surge suggests that AI-powered scams will continue growing in popularity across criminal networks worldwide.

Generative AI tools have eliminated the telltale signs that previously helped victims identify fraud attempts. Criminals now use AI to rephrase phishing emails and text messages, ironing out linguistic quirks that might betray non-native speakers [1]. This capability can mean the difference between success and failure when impersonating major brands, as proofreading is no longer a viable defense against these polished messages [2].

Deepfake technology has advanced dramatically over the past two years. Interpol noted that criminals can now create convincing voice clones with just ten seconds of reference material, such as audio extracted from social media posts [1]. In the Asia-Pacific region, fraudsters have deployed deepfake audio to mimic corporate executives during real-time phone calls, successfully authorizing fraudulent wire transfers [3].
Dark web marketplaces now offer full-service synthetic identity kits, commonly known as deepfake-as-a-service products, at affordable prices [1]. These platforms have accelerated the industrialization of cybercrime by providing low-skill individuals access to sophisticated fraud capabilities, effectively lowering barriers to entry for aspiring criminals [3].

While not yet deployed at scale, agentic AI represents a concerning evolution in fraud capabilities. These systems can "autonomously plan and execute complete fraud campaigns—from reconnaissance to ransom demands," functioning as a force multiplier for criminal operations [3]. An AI agent could potentially return all pertinent information about an individual, including credentials, or identify business system vulnerabilities for ransomware attacks [1].

In ransomware scenarios, these agents could analyze stolen data and advise criminals on pricing ransom demands based on the value of compromised information and the victim's financial position [1]. Whether this threat fully materializes remains uncertain, as the promise of agentic AI has yet to be realized even in legitimate applications [2]. Some cyber experts, like Kevin Mandia, have invested heavily in preparing for this scenario, while others exercise more restraint.

Interpol reported a disturbing rise in sextortion schemes using AI-generated imagery to blackmail victims into payment [1]. Some cases involve targets who initially rejected traditional fraud attempts, such as crypto, forex, or romance scams, but were then subjected to AI-assisted sextortion campaigns.

This development connects directly to the rapid expansion of human-trafficking-driven scam centers across the world. Originating in Southeast Asia approximately four years ago, these facilities often house victims trafficked under false pretenses and forced into perpetrating online scams [1]. The phenomenon now involves hundreds of thousands of individuals from nearly 80 nationalities globally, many confirmed as victims of human trafficking and cyber slavery [3].

These scam centers have expanded beyond Southeast Asia into Central and South America, North Africa, the Middle East, and parts of Europe [1][3]. The growth appears to be outpacing law enforcement's ability to shut them down, despite regular operations yielding significant arrests. One eight-week operation announced in February resulted in 651 arrests across 16 African countries, with over 1,200 victims identified [1].
Interpol noted that fraud-related notices and diffusions increased by 54 percent since 2024, while the organization supported member countries in more than 1,500 transnational fraud cases involving lost assets valued at $1.1 billion [3]. However, criminal leaders behind scam centers remain difficult to identify, using intermediaries and shell companies to evade detection.

To address this critical gap, Interpol launched Operation Shadow Storm, a new international task force funded by the United Kingdom's Home Office [3]. Using Interpol's network and tools such as I-GRIP, a stop-payment mechanism, the task force will target not only financial fraud generated by scam centers but also the links to cybercrime and human trafficking for forced criminality. This unified, data-driven response reflects a recognition that fraud has evolved from traditional social engineering into a sophisticated, tech-enabled industry requiring coordinated international action.