Curated by THEOUTPOST
On Thu, 17 Apr, 12:05 AM UTC
2 Sources
[1]
AI-driven synthetic fraud a growing threat to UK financial institutions, by Paul Weathersby
We recently shared some insights highlighting a critical issue for the financial services industry: as UK consumers increasingly rely on digital platforms, synthetic fraud is on the rise. Our latest data has shown a staggering 60% increase in synthetic identity fraud cases in 2024 compared to the previous year, with these cases now constituting nearly a third (29%) of all identity fraud. This underscores the evolving tactics of fraudsters, who are leveraging advanced technologies like generative AI to create convincing fake identities. As these fraudulent activities become more sophisticated, financial institutions are having to find ways to better safeguard themselves and their consumers against new threats.

To effectively combat this escalating issue, financial institutions must prioritise two key strategies: deploying cutting-edge technologies and fostering collaborative efforts. By embracing innovative solutions and working together, they can enhance their defences and ensure robust protection against the ever-changing landscape of fraud.

Understanding synthetic fraud

Historically, creating new identities to apply for financial products involved combining an individual's sensitive information, such as national insurance numbers or dates of birth, with either different identities or fake personally identifiable information. This process was time-consuming, but with generative AI, synthetic fraud can take place in a matter of minutes. Some criminals go as far as to fabricate entire social media accounts to make their fake identities feel more legitimate.

Detecting synthetic fraud is considerably more challenging than traditional identity fraud. Because synthetic identities are not linked to real individuals, there is no person monitoring the credit file who might raise the alarm. As a result, fraudulent accounts or lines of credit can go unnoticed for extended periods.
Unlike identity theft, where the real person might notice and report unfamiliar accounts, synthetic fraud lacks this layer of detection, making it harder to spot. Generative AI also aids fraudsters in altering voices and producing convincing fake identity documents to bypass security screenings. It is believed that the number of fake passports generated through AI could now exceed digitally altered physical documents for the first time.

The role of artificial intelligence

Fortunately, AI solutions are also at the forefront of solving the problem. These advanced systems can analyse vast amounts of data in real time, identifying patterns and anomalies that may indicate fraudulent activity. Ultimately, the integration of AI and other advanced technologies has had a significant impact on fraud prevention in the UK. According to UK Finance, financial services companies prevented £710 million of unauthorised fraud in the first half of the year. This success is largely due to the sophisticated fraud-prevention technologies now in place.

The role of data sharing

Data sharing also plays a crucial role in preventing synthetic fraud by fostering collaboration and information exchange among industry players. It enables banks, insurance companies, and other financial firms to share data on fraudulent activities, suspicious transactions, and emerging threats in real time, creating a robust mechanism against fraud. One of the main benefits of data sharing is the ability to identify and mitigate fraud patterns more effectively. By pooling data from multiple sources, financial institutions can detect abnormalities and patterns that may indicate fraudulent activity. This allows for quicker identification of fraud schemes that might go unnoticed if companies were operating in isolation. Moreover, data sharing enhances the speed and accuracy of fraud detection.
When a suspicious transaction is flagged by one company, the information can be rapidly shared, alerting other members to potential threats.

Staying ahead

What is interesting is that additional research by Experian, which surveyed more than 500 financial services companies, found that only a quarter (25%) feel confident in addressing the threat posed by synthetic identity fraud. Additionally, just 23% feel equipped to deal successfully with AI and deepfake fraud. This highlights the critical need for businesses to take action now.

While the fight against fraud is an ongoing battle and criminals continue to develop new methods, the key for financial institutions is to remain vigilant and proactive in updating their strategies for preventing financial crime. By leveraging the latest AI and data-sharing technologies, and fostering industry collaboration, they can stay ahead of emerging threats and safeguard their customers.
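The real-time pattern analysis described above can be illustrated with a minimal sketch. This is a toy single-signal anomaly check, not any institution's actual model; the z-score threshold and the transaction amounts are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=2.0):
    """Return transaction amounts that deviate strongly from the account's norm."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation in history, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# Routine spending with one unusually large transfer mixed in.
history = [42.0, 38.5, 55.0, 47.2, 40.1, 51.3, 44.8, 39.9, 4999.0]
print(flag_anomalies(history))  # [4999.0]
```

Production systems combine many such signals (velocity, device, geography, identity consistency) in a learned model rather than relying on a single statistic, but the principle is the same: surface the transactions that break an established pattern.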
[2]
5 Myths About Fraud Prevention for Financial Services Firms | PYMNTS.com
AI systems that calculate fraud probabilities rather than binary "yes-no" decisions catch more fraud and minimize false positives.

Fraud is getting more sophisticated, thanks to artificial intelligence (AI). Fraud can be perpetrated in the form of deepfake videos or voice, with AI producing a clone of a family member who is supposedly in an emergency and needs a cash transfer immediately. AI can write more convincing phishing emails, removing telltale signs such as broken English. AI can also fake images like a driver's license to fool and scam people, according to an FBI report.

"Fraud is only going to get worse with the creation of generative AI," said Mike de Vere, CEO of Zest AI, which leverages AI to help financial services firms make more informed lending decisions and mitigate fraud incidents.

According to a March 2025 report from the U.S. Federal Trade Commission (FTC), losses due to fraud hit $12.5 billion in 2024, up 25% from the prior year. More people also reported losing money to fraud: 38% last year compared with 27% in 2023. Investment scams cost people the most, totaling $5.7 billion, up 24% from the year before. The second highest were imposter scams, at $2.95 billion. However, imposter scams were the most commonly reported fraud, with online shopping fraud next. Notably, consumers lost more money to scams through bank transfers or cryptocurrency than through all other payment methods combined, the FTC said.

According to a PYMNTS Intelligence study in partnership with i2c, 28% of consumers fell victim to credit card fraud last year. Moreover, 37% said they were "very" or "extremely" worried about falling victim to such fraud, according to "Consumer Credit Economy: Credit Card Fraud."

In an interview with PYMNTS, de Vere said fraud losses are projected to reach $40 billion by 2027.
Fraud tools are also becoming more accessible, he added, noting that for as little as $20, criminals can create fake IDs and pay stubs.

Based on his experience working with banks and credit unions, de Vere shared his insights on five myths about fraud prevention that could leave organizations vulnerable.

The first misconception is that fraudsters only target major financial institutions. In reality, 8 out of 10 banks and credit unions, including smaller ones, reported fraud losses exceeding $500,000 last year. "It disproportionately impacts smaller financial institutions," de Vere said. "A fraudster going up against Citi's IT team is probably going to be less successful than [targeting] a tiny credit union that outsources their IT."

The second myth is that monitoring individual transactions provides adequate fraud protection, for example by watching a customer's credit card patterns to spot a fraudulent purchase. De Vere said this narrow approach misses the broader behavioral patterns that AI can detect. He shared a real-world example: a fraudster opened a credit card at a credit union, charging about $100 a month and paying it off regularly. By itself, this behavior doesn't raise red flags. However, the criminal was doing the same thing at several credit unions, de Vere said. The individual eventually applied for and received personal loans, maxed out the credit cards and disappeared with the money.

The third myth is the idea that to be secure, a financial institution has to put the customer through several hoops, such as answering security questions, which creates friction in the customer experience. These binary fraud systems, which force a yes-or-no answer on every application, can create problems unnecessarily, de Vere said.
He shared his personal experience of being flagged for ID fraud during an auto loan application simply because his last name was squished together. "An AI solution could have looked at my credit report and seen that ... two of my credit cards actually have my last name smashed together, so it's probably not likely that I'm a fraudster."

The fourth myth is that humans are the gold standard when it comes to catching fraud. De Vere argued that human reviewers are only as good as their experiences, and manual reviews are limited by the reviewer's experience within an institution. In contrast, an AI model can consume trillions of data points to identify patterns of fraud. "It's so far beyond where a human can be," de Vere said.

The final myth is that fraud prevention solutions are interchangeable. De Vere said that many available solutions are incomplete, creating blind spots in security coverage. A robust fraud prevention solution, he said, should offer probability scores rather than binary "fraud/no-fraud" decisions, be trained on comprehensive datasets, and be tailored to an organization's needs and geographic location. This approach lets organizations identify local fraud rings and deploy appropriate security measures.

Advocating for a collaborative approach to fighting fraud, de Vere said, "We need to be thinking less about it being a competitive issue and more about it being a collaborative issue." To that end, Zest AI has created a consortium to share fraud experiences, enabling AI models to learn from attacks on one institution to protect others in the same ecosystem.
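The probability-score approach de Vere describes can be sketched as a simple routing function. The thresholds and tier names below are hypothetical assumptions for illustration, not Zest AI's actual values:

```python
def route_by_fraud_score(score: float) -> str:
    """Route an application by fraud probability instead of a hard yes/no.

    Thresholds are illustrative assumptions, not production values.
    """
    if score >= 0.90:
        return "decline"        # near-certain fraud
    if score >= 0.40:
        return "manual_review"  # ambiguous: add checks rather than block outright
    return "approve"            # low risk: no extra friction for the customer

# A quirk like a squished-together last name might score mid-range:
# a binary system would simply decline it; a scored system adds review.
print(route_by_fraud_score(0.45))  # manual_review
print(route_by_fraud_score(0.05))  # approve
```

The design point is that a middle tier exists at all: friction is applied only where the model is genuinely uncertain, rather than to every applicant or to none.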
The rise of AI-powered synthetic fraud is posing significant challenges to financial institutions, with a 60% increase in cases reported in 2024. This article explores the nature of this threat, its impact, and strategies to combat it.
In 2024, the financial services industry witnessed a staggering 60% increase in synthetic identity fraud cases compared to the previous year, now accounting for nearly a third of all identity fraud [1]. This surge highlights the evolving tactics of fraudsters who are leveraging advanced technologies like generative AI to create convincing fake identities. The U.S. Federal Trade Commission reported that fraud losses hit $12.5 billion in 2024, a 25% increase from the prior year [2].

Synthetic fraud involves creating new identities by combining an individual's sensitive information with either different identities or fake personally identifiable information. Generative AI has dramatically accelerated this process, enabling fraudsters to create fake identities in minutes. Some criminals even fabricate entire social media accounts to make their fake identities appear more legitimate [1].

Detecting synthetic fraud is particularly challenging because these identities are not linked to real individuals, meaning there's no person monitoring the credit file who might raise an alarm. This lack of a real-world counterpart allows fraudulent accounts or lines of credit to go unnoticed for extended periods [1].

While AI is being used to perpetrate fraud, it's also at the forefront of solving the problem. Advanced AI systems can analyze vast amounts of data in real time, identifying patterns and anomalies that may indicate fraudulent activity. These systems have proven effective, with UK Finance reporting that financial services companies prevented £710 million of unauthorized fraud in the first half of the year [1].
AI-powered fraud prevention solutions offer several advantages:

- Real-time analysis of vast amounts of data to surface patterns and anomalies [1]
- Probability scores rather than binary "fraud/no-fraud" decisions, catching more fraud while minimizing false positives [2]
- Detection of broader behavioral patterns, such as the same identity operating across several institutions, that individual transaction monitoring misses [2]
Data sharing plays a crucial role in preventing synthetic fraud. By fostering collaboration and information exchange among industry players, financial institutions can create a robust mechanism against fraud. This approach enables quicker identification of fraud schemes that might go unnoticed if companies were operating in isolation [1].

Some organizations are taking this collaboration further. For instance, Zest AI has created a consortium to share fraud experiences, enabling AI models to learn from attacks on one institution to protect others in the same ecosystem [2].
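Pooled detection of this kind can be sketched minimally: consortium members contribute tokenised application records, and the combined view reveals an identity active at many institutions, the pattern no single member could see alone. The identity keys and institution names below are hypothetical, and a real consortium would share hashed or tokenised identifiers, never raw personal data:

```python
from collections import defaultdict

def find_cross_institution_identities(applications, min_institutions=3):
    """Return identity keys seen applying at several distinct institutions.

    `applications` is a list of (identity_key, institution) pairs
    contributed by consortium members.
    """
    seen = defaultdict(set)
    for identity_key, institution in applications:
        seen[identity_key].add(institution)
    return {k for k, insts in seen.items() if len(insts) >= min_institutions}

apps = [
    ("id_a1", "CU-North"), ("id_a1", "CU-South"), ("id_a1", "CU-East"),
    ("id_b2", "CU-North"), ("id_b2", "CU-North"),
]
print(find_cross_institution_identities(apps))  # {'id_a1'}
```

Each member sees only one or two unremarkable applications; only the pooled view crosses the threshold and flags the identity for closer scrutiny.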
Despite the growing threat, many financial institutions are underprepared. A survey by Experian found that only 25% of financial services companies feel confident in addressing the threat posed by synthetic identity fraud, and just 23% feel equipped to deal successfully with AI and deepfake fraud [1].
There are also several misconceptions about fraud prevention that could leave organizations vulnerable [2]:

- That fraudsters only target major financial institutions, when smaller banks and credit unions are disproportionately affected
- That monitoring individual transactions alone provides adequate protection, when broader behavioral patterns matter
- That security requires adding friction to the customer experience
- That human reviewers are the gold standard for catching fraud
- That all fraud prevention solutions are interchangeable
As the threat of AI-driven synthetic fraud continues to evolve, financial institutions must remain vigilant and proactive in updating their strategies. By leveraging the latest AI and data-sharing technologies and fostering industry collaboration, they can stay ahead of emerging threats and safeguard their customers.
References

[1] Paul Weathersby, "AI-driven synthetic fraud a growing threat to UK financial institutions"
[2] "5 Myths About Fraud Prevention for Financial Services Firms", PYMNTS.com
© 2025 TheOutpost.AI All rights reserved