Curated by THEOUTPOST
On Thu, 17 Apr, 12:04 AM UTC
7 Sources
[1]
Google suspended 39.2 million malicious advertisers in 2024 thanks to AI
Google may have finally found an application of large language models (LLMs) that even AI skeptics can get behind. The company just released its 2024 Ads Safety report, confirming that it used a collection of newly upgraded AI models to scan for bad ads. The result is a huge increase in suspended spammer and scammer accounts, with fewer malicious ads in front of your eyeballs. While stressing that it was not asleep at the switch in past years, Google reports that it deployed more than 50 enhanced LLMs to help enforce its ad policy in 2024. Some 97 percent of Google's advertising enforcement involved these AI models, which reportedly require far less data to make a determination, making it feasible to keep pace with rapidly evolving scam tactics. Google says its efforts in 2024 resulted in 39.2 million US ad accounts being suspended for fraudulent activity, more than three times the number suspended in 2023 (12.7 million). The factors that trigger a suspension usually include ad network abuse, improper use of personalization data, false medical claims, trademark infringement, or a mix of violations. Despite these efforts, some bad ads still make it through: Google says it identified and removed 1.8 billion bad ads in the US and 5.1 billion globally. That's a small drop from the 5.5 billion ads removed in 2023, but the implication is that Google had to remove fewer ads because it stopped fraudulent accounts before they could spread. The company claims most of the 39.2 million suspended accounts were caught before they ran a single ad.
[2]
Google used AI to suspend over 39M ad accounts suspected of fraud | TechCrunch
Google on Wednesday said it suspended 39.2 million advertiser accounts on its platform in 2024 -- more than triple the number from the previous year -- in its latest crackdown on ad fraud. By leveraging large language models (LLMs) and using signals such as business impersonation and illegitimate payment details, the search giant said it could suspend a "vast majority" of ad accounts before they ever served an ad. Last year, Google launched over 50 LLM enhancements to strengthen its safety enforcement mechanisms across all its platforms. "While these AI models are very, very important to us and have delivered a series of impressive improvements, we still have humans involved throughout the process," said Alex Rodriguez, a general manager for Ads Safety at Google, in a virtual media roundtable. The executive told reporters that a team of over 100 experts was assembled across Google, including members from the Ads Safety team, the Trust and Safety division, and researchers from DeepMind. They analyzed deepfake ad scams involving public figure impersonations and developed countermeasures. The company introduced technical countermeasures and over 30 ads and publisher policy updates last year. These moves helped suspend over 700,000 offending advertising accounts, leading to a 90% drop in reports of deepfake ads, the company claims. In the U.S. alone, Google said it suspended 39.2 million advertiser accounts and took down 1.8 billion ads last year, with key violations tied to ad network abuse, trademark misuse, healthcare claims, personalized ads, and misrepresentation. India, the world's most populous country and the second-biggest internet market after China in terms of users, saw 2.9 million account suspensions last year, Google said, making it the second-highest after the U.S. The company also removed 247.4 million ads in India, with the top five policy violations related to financial services, trademark misuse, ad network abuse, personalized ads, and gambling and games.
Of all the advertiser account suspensions, Google said it suspended 5 million accounts for scam-related violations. Overall, the company removed almost half a billion ads related to scams. Google also verified more than 8,900 new election advertisers in 2024, which saw half of the world's population go to the polls, and removed 10.7 million election ads. However, Rodriguez noted that the volume of election ads compared to Google's overall ad numbers was relatively small and would not significantly impact its safety metrics this year. In total, Google said it blocked 5.1 billion ads last year and removed 1.3 billion pages. In comparison, it blocked over 5.5 billion ads and took action against 2.1 billion publisher pages in 2023. Google told TechCrunch that the decreasing numbers indicated improvements in its prevention efforts. By improving early detection and suspension of malicious accounts, fewer harmful ads are produced or reach the platform, the company said. The company also restricted 9.1 billion ads last year, it said. Importantly, large-scale suspensions sometimes spark concerns over how fairly a company applies its rules. Google said it offers an appeal process that includes human reviews to ensure it took "appropriate action." "Oftentimes, some of our message wasn't as clear and transparent about specifics, about what the rationale was, or reasoning, and sometimes that left the advertiser a little more confused. We ended up updating a bunch of our policies as it related to that, a bunch of our transparency capabilities in terms of the messaging around what and why to help the advertiser...It's been a big focus for the team as part of 2024 and into 2025," Rodriguez said.
[3]
Google Suspended 39 Million Ad Accounts for Fraudulent Activities Last Year
Getting your ads in front of Google users can mean big business -- for scammers, too. Google suspended over 39 million US ad accounts suspected of fraudulent activities in 2024, the company says in an annual ad safety report published Wednesday. That's more than three times the 2023 figure (12.7 million), and the most common violations included ad network abuse, trademark infringement, improper use of personalized ads, false healthcare and medicine claims, and misrepresentation. To combat threats posed by ever-evolving bad actors, Google says it added more than 50 enhancements to the large language models (LLMs) used for detecting ad policy violations. These updated AI models require only a "fraction of the information earlier models needed," relying on signals like business impersonation and illegitimate payment details for early indications of fraud. They contributed to 97% of Google's enforcement actions, including the removal of 1.8 billion bad advertisements in the US and 5.1 billion globally. Last year, Google also bolstered its security against deepfake ads used in scams. A team of 100 experts analyzed common trends and updated the company's Misrepresentation policy. This led to the suspension of 700,000 offending advertiser accounts and a 90% drop in reports of deepfake scams, Google claims. In a year that saw major elections across half the globe, Google says it verified more than 8,900 new election advertisers and took down 10.7 million ads from unverified accounts. Restricting legally or culturally sensitive ads was another area of focus: Google restricted 9.1 billion ads in regions where they might be inappropriate; 96 million of those promoted adult content, 109 million promoted gambling and games, and 34.2 million promoted alcohol. A major chunk of Google's annual revenue stems from its advertising business; the company earned $348.16 billion from ads alone in 2024.
[4]
Google blocked over 5 billion ads in 2024 amid rise in AI-powered scams
Google blocked 5.1 billion ads and suspended more than 39.2 million advertiser accounts in 2024, according to its 2024 Ads Safety Report released this week. The company attributes the increased enforcement activity to the growing threat of AI-generated content, impersonation scams, and abuse of its ad platform. In particular, Google highlighted the use of generative AI tools to create deepfake video impersonations of celebrities and public figures to promote scams, which, in BleepingComputer's experience, are commonly investment and cryptocurrency scams. As a result, Google says it permanently suspended over 700,000 advertiser accounts for policy violations related to AI-driven impersonation scams. "To fight back, we quickly assembled a dedicated team of over 100 experts to analyze these scams and develop effective countermeasures, such as updating our Misrepresentation policy to suspend advertisers that promote these scams," explains Google's 2024 Ads Safety Report. "As a result, we were able to permanently suspend over 700,000 offending advertiser accounts. This led to a 90% drop in reports of this kind of scam ad last year. While we are encouraged by this progress, we continue to work to prevent these scams." While AI may be driving increased scams and malicious advertisements, Google also credits it for helping identify and enforce its policies on publisher pages. In 2024, the company added over 50 enhancements to its machine-learning models to help detect new threats before users could report them. In addition to advertiser suspensions, Google removed or blocked ads across numerous categories. The company also enforced its ad policies against publishers, issuing policy violations against 1.3 billion web pages and 220,000 publisher sites. Election-related advertising also saw an increase in 2024, with the company verifying 8,900 new election advertisers and removing 10.7 million election ads from unverified accounts.
[5]
Google Blocked 5.1B Harmful Ads and Suspended 39.2M Advertiser Accounts in 2024
Google on Wednesday revealed that it suspended over 39.2 million advertiser accounts in 2024, with a majority of them identified and blocked by its systems before they could serve harmful ads to users. In all, the tech giant said it stopped 5.1 billion bad ads, restricted 9.1 billion ads, and blocked or restricted ads on 1.3 billion pages last year. It also suspended over 5 million accounts for scam-related violations. The top ad policy violations included ad network abuse (793.1 million), trademark misuse (503.1 million), personalized ads (491.3 million), legal requirements (280.3 million), financial services (193.7 million), and misrepresentation (146.9 million). Most of the pages on which its ads were blocked or restricted involved sexual content, dangerous or derogatory content, malware, shocking content, weapons promotion and sales, online gambling, tobacco, alcohol sale or misuse, intellectual property abuse, and sexually explicit content. Google said it has been using artificial intelligence (AI) powered tools to quickly flag emerging threats and abuse patterns, using signals like business impersonation and illegitimate payment details as early indicators to tackle ad fraud. "For instance, we tackled AI-generated deepfakes used in public figure impersonation scams, suspending over 700,000 accounts for this specific violation as part of our broader effort that blocked/removed 415 million ads and suspended over 5 million accounts for scam-related violations," the company said in a statement shared with The Hacker News. Google also said it expanded Advertiser identity verification to more than 200 countries and territories to increase transparency and enforce requirements for AI-generated content in election ads. As a result, more than 8,900 new election advertisers were verified and over 10.7 million election ads from unverified accounts were removed in 2024.
Google's ongoing efforts towards ads safety come as malvertising continues to be an important initial access vector for malware, with threat actors abusing fraudulent ads on Google Search (and other search engines) to drive users to phony sites. "The ad safety landscape is constantly changing, reshaped by technological breakthroughs like advancements in AI, emerging abuse tactics, and global events, demanding continuous agility from our industry," it said.
[6]
Google's AI models shut down 39 million fraudulent advertisers
Google's AI has helped to block billions of harmful ads. Google's use of AI to combat harmful ads has resulted in the suspension of 39.2 million fraudulent accounts. On Wednesday, the tech giant published its 2024 Ads Safety Report, highlighting its use of advanced LLMs (Large Language Models) to detect and enforce against advertiser fraud. Since 2023, Google has added "50 enhancements" to its LLMs that "need only a fraction of the information earlier models needed to quickly recognize emerging threats, identify patterns of abuse, and distinguish legitimate businesses from scams." Indicators of abuse that can be detected by Google's AI tools include business impersonation and illegitimate payment details. As a result, Google blocked or removed 5.1 billion ads last year. The majority of the ads Google stopped were due to "abusing the ad network," meaning methods of circumventing Google's review process by tricking users with bait-and-switch ads or using malware. Other ads caught in the sweep involved trademark violations, as well as personalized ads that violate Google's policies by targeting users or promoting products based on sensitive topics like personal hardships, identity and belief, and sexual interests. Google is also using AI to fight the growing number of bad actors leveraging AI for scams. Deepfakes have become more pervasive and convincing. Last year, actor Tom Hanks' likeness was used to shill medical hoaxes, and Scarlett Johansson took legal action against an app for deepfaking her image and voice to promote it. Google went after "bad actors using AI-generated imagery or audio to imply an affiliation with a celebrity to promote a scam" by suspending over 700,000 advertiser accounts, which led to a 90 percent decrease in reports. Overall, it blocked or removed 415 million scam ads. Google's ad safety team said they shut down the majority of scammy accounts before users were ever served an ad.
Given the present volume of offensive or harmful stuff on the internet, we shudder to think of the ads that never saw the light of day.
[7]
Google used AI to banish half a billion scam ads in 2024
This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community. The search giant says it suspended 39.2 million advertiser accounts in the US alone in 2024 -- more than triple the number from the previous year. Overall, the company removed almost half a billion ads related to scams. Last year, Google launched over 50 upgrades to its Large Language Models, specifically targeted at rooting out bad actors. "These updates sped up complex investigations, helping us identify bad actors and fraud signals -- like illegitimate payment information -- during account setup," states the firm. "This kept billions of policy-violating ads from ever showing to a consumer, while ensuring legitimate businesses can show ads to customers faster." The company says it also adapted its defenses against evolving scams that use deepfake, AI-generated public figure impersonation ads. Scams impersonating businesses and government officials cost Americans $2.95 billion last year, according to the Federal Trade Commission. "To fight back, we assembled a team of over 100 experts to develop countermeasures, such as updating our Misrepresentation policy to suspend advertisers that promote these scams," the company says. "As a result, we were able to permanently suspend more than 700,000 offending advertiser accounts. This led to a 90% drop in reports of this kind of scam ad last year." While the figures are impressive, Google and other Big Tech firms like Meta remain the primary source of scam accounts on the Internet - leading to repeated calls from financial institutions and fintechs for social media companies to share in reimbursement for victims of scams originating on their platforms.
Google's 2024 Ads Safety report reveals a significant increase in suspended advertiser accounts, largely attributed to enhanced AI models, along with billions of blocked ads. The tech giant's efforts aim to combat fraud, scams, and malicious advertising practices.
In a significant leap forward for online advertising safety, Google has reported a massive increase in suspended advertiser accounts for 2024, alongside billions of blocked ads. The tech giant's annual Ads Safety report, released this week, showcases the power of artificial intelligence in combating fraudulent activities and malicious advertising practices [1][2].
Google suspended a staggering 39.2 million advertiser accounts in the United States during 2024, more than tripling the 12.7 million accounts suspended in 2023 [1][3]. This dramatic increase is largely attributed to the deployment of over 50 enhanced large language models (LLMs) that now form the backbone of Google's ad policy enforcement [1].
The company reports that 97% of its advertising enforcement actions in 2024 involved these AI models [1]. These advanced LLMs require less data to make determinations, allowing for more efficient detection of rapidly evolving scam tactics [1]. Alex Rodriguez, a general manager for Ads Safety at Google, emphasized that while AI models are crucial, human oversight remains an integral part of the process [2].
Globally, Google blocked or removed 5.1 billion ads in 2024, a slight decrease from 5.5 billion in 2023 [1][4]. The company attributes this reduction to improved prevention efforts and early detection of malicious accounts [2]. In the U.S. alone, 1.8 billion ads were taken down [1].
As AI-generated content poses new challenges, Google has taken steps to address emerging threats. The company suspended over 700,000 advertiser accounts for violations related to AI-driven impersonation scams, particularly those involving deepfake videos of celebrities and public figures [4][5]. This led to a 90% drop in reports of such scam ads [2][5].
In a year marked by significant global elections, Google verified more than 8,900 new election advertisers and removed 10.7 million election ads from unverified accounts [2][4]. The company also restricted 9.1 billion ads in regions where they might be culturally or legally sensitive, including 96 million ads promoting adult content and 109 million related to gambling and games [3].
Despite these improvements, Google acknowledges that the ad safety landscape is constantly evolving. The company continues to refine its approach, updating policies and improving transparency in its communication with advertisers [2]. As malvertising remains a significant vector for malware distribution, Google's efforts represent a crucial step in enhancing online safety and user trust [5].
While these measures are primarily aimed at improving user safety and ad quality, they also have implications for Google's bottom line. In 2024, the company earned $348.16 billion from ads alone, underscoring the importance of maintaining a trustworthy advertising ecosystem [3].
Google's Trust & Safety team reveals the most damaging online scams, highlighting the rise of AI-enhanced fraud and cloaking techniques. The tech giant emphasizes the need for user vigilance and improved detection methods.
3 Sources
Google's efforts to secure the Android ecosystem in 2024 resulted in blocking 2.36 million risky apps and banning 158,000 developer accounts. The company attributes this success to AI-assisted reviews and improved security measures.
2 Sources
Google partners with anti-scam organizations to launch the Global Signal Exchange, a platform for sharing data on online fraud and scams, set to begin operations in 2025.
2 Sources
Google is using advanced AI and machine learning algorithms to identify and remove fraudulent reviews, fake business profiles, and suspicious edits on Google Maps, enhancing the platform's reliability and user trust.
4 Sources
Google introduces AI-driven scam detection features for Android devices to protect users from sophisticated text and call-based scams, enhancing cybersecurity while maintaining user privacy.
8 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved