On Fri, 15 Nov, 4:01 PM UTC
3 Sources
[1]
These are the most damaging scams around, according to Google -- so be on your guard
Google has published a new report outlining the most common techniques threat actors are using against victims, highlighting a practice known as 'cloaking' as a way to deceive users into disclosing sensitive information. The technique uses tools called 'cloakers' to show different content to different users based on identifying information such as IP addresses. Often, cloaking involves showing one version of a landing page or website to search engines and bots, and another version to real human users. "Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users," Laurie Richardson, Vice President of Trust & Safety at Google, wrote in the report.
Cloaking does have some legitimate uses, such as for advertisers who want to prevent their pages from being scraped by bots, or who want to hide their strategies from competitors. However, Google has observed scammers using cloaking tools to redirect users who click an ad to scareware sites. These sites trick users into believing their device is infected with malware, or that their account has been blocked due to unauthorized activity, and then funnel them to a fake 'customer support' site where they are persuaded to reveal sensitive information. "The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products," Google says.
The report also outlines the exploitation of major events. Scammers take advantage of significant events such as elections, sports fixtures, or humanitarian disasters, and this well-established technique is being bolstered by AI tools that can respond to breaking news quickly and advertise false products and services. Elsewhere, Google flagged fake charity scams, which aim to defraud people looking to donate to relief efforts; the appeals are set up to seem legitimate, with AI tools used to produce huge amounts of content that overwhelms users and deceives them into clicking malicious links.
"Preventing user harm from malicious scams requires effective cooperation across the online ecosystem," Richardson concluded. "Bad actors are constantly evolving their tactics and techniques...we're sharpening our detection and enforcement techniques to meet these threats, enhancing our proactive capabilities, and ensuring we have robust and fair policies in place to protect people."
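To make the mechanism concrete, here is a minimal, hypothetical sketch (not taken from Google's report) of how a reviewer might probe for the simplest form of cloaking: fetching the same URL once with a crawler-like User-Agent and once with a browser-like one, then comparing the responses. The URL, User-Agent strings, and the hash-comparison heuristic are illustrative assumptions; real cloakers also key on IP address, referrer, and ad-click parameters, and dynamic pages can differ between any two requests, so a production check would be far more involved.

```python
# Illustrative sketch only: a naive check for User-Agent-based cloaking.
# Real cloakers often key on IP ranges, referrers, and ad-click parameters,
# so a simple comparison like this will miss most of them.
import hashlib
import requests

BOT_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36")

def fingerprint(url: str, user_agent: str) -> str:
    """Fetch the page with a given User-Agent and hash the response body."""
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    return hashlib.sha256(resp.content).hexdigest()

def looks_cloaked(url: str) -> bool:
    """Flag pages whose content differs between a crawler-like and a browser-like client.

    Note: dynamic content (timestamps, ad slots) can also cause differences,
    so a hit here is only a signal for manual review, not proof of cloaking.
    """
    return fingerprint(url, BOT_UA) != fingerprint(url, BROWSER_UA)

if __name__ == "__main__":
    print(looks_cloaked("https://example.com"))  # hypothetical target URL
```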
[2]
Google Warns of Rising Cloaking Scams, AI-Driven Fraud, and Crypto Schemes
Google has revealed that bad actors are leveraging techniques like landing page cloaking to conduct scams by impersonating legitimate sites. "Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users," said Laurie Richardson, VP and Head of Trust and Safety at Google. "The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit products or unrealistic products."
Cloaking refers to the practice of serving different content to search engines like Google than to users, with the ultimate goal of manipulating search rankings and deceiving visitors. The tech giant said it has also observed a cloaking trend in which users who click on ads are redirected via tracking templates to scareware sites that claim their devices are compromised with malware, then funneled to phony customer support sites that trick them into revealing sensitive information. The advisory details several other recent tactics adopted by fraudsters and cybercriminals as well. Google told The Hacker News that it intends to release such advisories about online fraud and scams every six months as part of its efforts to raise awareness about the risks.
Many cryptocurrency-related scams, such as pig butchering, originate from Southeast Asia and are run by organized crime syndicates from China, who lure individuals with the prospect of high-paying jobs, only to confine them within scam factories located across Burma, Cambodia, Laos, Malaysia, and the Philippines. A report published by the United Nations last month revealed that criminal syndicates in the region are stepping up by swiftly integrating "new service-based business models and technologies including malware, generative AI, and deepfakes into their operations while opening up new underground markets and cryptocurrency solutions for their money laundering needs." The U.N. Office on Drugs and Crime (UNODC) described the incorporation of generative AI and other technological advancements in cyber-enabled fraud as a "powerful force multiplier," not only making it more efficient but also lowering the bar to entry for technically less-savvy criminals.
Google, earlier this April, sued two app developers based in Hong Kong and Shenzhen for distributing fake Android apps that were used to pull off consumer investment fraud schemes. Late last month, the company, alongside Amazon, filed a lawsuit against a website named Bigboostup.com for selling and posting fake reviews on Amazon and Google Maps. "The website sold fake product reviews to bad actors to publish on their product listing pages in Amazon's store and fake reviews of business listings on Google Search and Google Maps," Amazon said.
The development comes a little over a month after Google announced a partnership with the Global Anti-Scam Alliance (GASA) and DNS Research Federation (DNS RF) to tackle online scams. Furthermore, the company said it blocked or removed more than 5.5 billion advertisements for violating its policies in 2023 alone, and that it is rolling out live scam detection in its Phone app for Android, powered by its Gemini Nano on-device AI model, to protect users against potential scams and fraud.
"For example, if a caller claims to be from your bank and asks you to urgently transfer funds due to an alleged account breach, Scam Detection will process the call to determine whether the call is likely spam and, if so, can provide an audio and haptic alert and visual warning that the call may be a scam," it said. Another new security feature is the introduction of real-time alerts in Google Play Protect to notify users of potentially malicious apps like stalkerware installed on their devices. "By looking at actual activity patterns of apps, live threat detection can now find malicious apps that try extra hard to hide their behavior or lie dormant for a time before engaging in suspicious activity," Google noted.
[3]
Google says these are the top 5 online threats targeting you right now
Google is strengthening policies to block deceptive ads, apps, and sites. In an attempt to raise user awareness about the growing threats posed by online fraud, Google is starting a regular advisory for consumers to keep them updated and informed about the latest scams. The endeavor comes from Google's Trust & Safety team, which tracks and addresses online scams to protect users. In its inaugural advisory, Google lists five of the most concerning online scams going around right now:
Public Figure Impersonation Campaigns: In such campaigns, scammers use deepfake technology to impersonate a public figure and promote fraudulent schemes. These schemes often misuse AI to create highly convincing content, tricking users into engaging with fake giveaways or high-return investment scams.
Crypto Investment Scams: Scammers use hyper-realistic impersonations of trusted public figures and brands to lure victims into schemes that promise too-good-to-be-true returns. Google says these scams often originate from organized crime networks and are difficult for law enforcement to combat due to their international nature.
Cloned Apps and Phishing Sites: Scammers are cloning legitimate apps and websites to steal personal information or spread malware. Common tactics include creating fake tech support landing pages or exploiting employee login portals to access sensitive corporate data.
Landing Page Cloaking: Scammers use tricks to deceive Google into seeing different content on a landing page than what a user sees. These landing pages often look like legitimate sites and present users with a sense of urgency to take actions that could compromise their security.
Exploitation of Major Events: Google says scammers are increasingly using AI to enhance traditional fraud techniques, targeting major events like elections, sports, and disasters with fake products and services.
By sharing these findings, Google hopes to help users stay vigilant and avoid falling victim to increasingly sophisticated online scams.
Google's Trust & Safety team reveals the most damaging online scams, highlighting the rise of AI-enhanced fraud and cloaking techniques. The tech giant emphasizes the need for user vigilance and improved detection methods.
In a bid to raise awareness about evolving online dangers, Google's Trust & Safety team has released a comprehensive report detailing the most prevalent and damaging scams currently targeting internet users. The tech giant's initiative aims to keep consumers informed and vigilant against increasingly sophisticated fraudulent activities [1].
One of the most concerning techniques highlighted in the report is 'cloaking.' This practice involves showing different content to different users based on identifying information such as IP addresses. Laurie Richardson, Vice President of Trust & Safety at Google, explains, "Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content which enables them to deploy the scam directly to users" [2].
Scammers are using cloaking tools to redirect users who click on ads to scareware sites, tricking them into believing their devices are infected with malware or that their accounts have been compromised. This manipulation often leads users to fake 'customer support' sites where they may unknowingly disclose sensitive information [1].
The report also highlights how artificial intelligence is being leveraged to enhance traditional fraud techniques. Scammers are using AI tools to quickly respond to breaking news and advertise false products and services, particularly during major events such as elections, sports fixtures, or humanitarian disasters [1].
Cryptocurrency-related scams, including the notorious 'pig butchering' scheme, are becoming increasingly prevalent. These scams often originate from Southeast Asia and are run by organized crime syndicates. They lure individuals with promises of high-paying jobs, only to trap them in scam factories across various countries in the region [2].
To combat these threats, Google is implementing several measures: rolling out AI-powered scam detection in its Phone app for Android, powered by the on-device Gemini Nano model; adding real-time alerts in Google Play Protect to flag potentially malicious apps; partnering with the Global Anti-Scam Alliance and the DNS Research Federation; pursuing legal action against fraud operators; and continuing to block or remove billions of policy-violating advertisements [2].
Google emphasizes the need for user awareness and caution. The company plans to release similar advisories every six months to keep users informed about the latest online threats [3]. Richardson concludes, "Preventing user harm from malicious scams requires effective cooperation across the online ecosystem. Bad actors are constantly evolving their tactics and techniques...we're sharpening our detection and enforcement techniques to meet these threats" [1].
Reference
[1] These are the most damaging scams around, according to Google -- so be on your guard
[2] Google Warns of Rising Cloaking Scams, AI-Driven Fraud, and Crypto Schemes
[3] Google says these are the top 5 online threats targeting you right now
Google partners with anti-scam organizations to launch the Global Signal Exchange, a platform for sharing data on online fraud and scams, set to begin operations in 2025.
2 Sources
Google introduces AI-driven scam detection features for Android devices to protect users from sophisticated text and call-based scams, enhancing cybersecurity while maintaining user privacy.
8 Sources
A sophisticated AI-powered scam targeting Gmail users combines deepfake robocalls and fake emails to gain access to accounts, highlighting the growing threat of AI in cybercrime.
2 Sources
Google's 2024 Ads Safety report reveals a significant increase in suspended advertiser accounts and blocked ads, largely attributed to enhanced AI models. The tech giant's efforts aim to combat fraud, scams, and malicious advertising practices.
7 Sources
A sophisticated AI-based scam targeting Gmail users combines spoofed phone numbers, fake emails, and AI-generated voices to trick victims into revealing their account credentials.
11 Sources