2 Sources
[1]
Scammers are poisoning AI search results to steer you straight into their traps - here's how
AI answers can surface poisoned content and put users at risk. Cybercriminals are turning their attention to the public sources AI chatbots scrape to promote scam call center numbers, researchers say, creating a new attack surface for scammers worldwide. According to new research, published by Aurascape's Aura Labs on Dec. 8, threat actors are "systematically manipulating public web content" in what the team has dubbed large language model (LLM) phone number poisoning. Also: Are AI browsers worth the security risk? Why experts are worried In a campaign being tracked by the cybersecurity firm, this technique is being used to ensure systems based on LLM models, including Google's AI Overview and Perplexity's Comet browser, have recommended scam airline customer support and reservations phone numbers as if they were official -- and trusted -- contact details. Aurascape says that rather than directly targeting LLMs, this technique -- reminiscent of prompt injection attacks -- relies on poisoning the content an LLM scrapes and indexes to provide the context and information required to answer user queries. Also: I've been testing the top AI browsers - here's which ones actually impressed me Many of us have heard of Search Engine Optimization (SEO), but how about Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO)? These techniques are focused on ensuring a website or online service becomes a source used for AI-based summaries and search query answers, instead of optimizing content to appear higher up in traditional search engine results. In the campaigns recorded by Aurascape, this is how GEO and AEO are being abused to promote phishing and scam content: Now that these fake sources of information are in place, LLM-based assistants and summarization features merge each source into digestible 'trusted' answers that can be provided to users of AI services and browsers. Also: Should you trust AI agents with your holiday shopping? Here's what experts want you to know According to the team, in some cases, this means that unwitting users are steered toward scams, including fraudulent call centers. "By seeding poisoned content across compromised government and university sites, popular WordPress blogs, YouTube descriptions, and Yelp reviews, they are steering AI search answers toward fraudulent call centers that attempt to extract money and sensitive data from unsuspecting travelers," the researchers say. The researchers noted several instances of this technique being actively used in the wild. For example, when Perplexity was queried with: "the official Emirates Airlines reservations number," AI returned a "fully fabricated answer that included a fraudulent call-center scam number." Another scam call center number was returned when the team requested the British Airways reservations line. Also: Gemini vs. Copilot: I tested the AI tools on 7 everyday tasks, and it wasn't even close Google's AI Overview was also found to be issuing fraudulent and potentially dangerous contact information. When asked for the Emirates phone number, its response included "multiple fraudulent call-center numbers as if they were legitimate Emirates customer service lines." The problem is that LLMs are pulling both legitimate and fraudulent content, which can make content appear to be trustworthy and make scam detection difficult. It won't just be the sources Google or Perplexity systems use, either. As Aurascape says, we are likely seeing the emergence of a "broad, cross-platform contamination effect." 
Also: How chatbots can change your mind - a new study reveals what makes AI so persuasive "Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources," the researchers noted. "This tells us the problem is not isolated to a single model or single vendor -- it is becoming systemic." This technique could be considered a fork of indirect prompt injection, in which website code or functionality is compromised to force an LLM to perform an action or act in a harmful way. To stay safe, if you are going to use an AI browser or rely on AI summaries, you should always verify an answer you are given -- especially if it involves contact information. Furthermore, you should steer clear of providing any sensitive information to AI assistants, especially considering how new and untested they are. Just because they are convenient doesn't mean they are safe, regardless of the provider.
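A practical way to follow that advice is to compare any number an AI assistant hands you against a contact page you navigate to yourself. Below is a minimal sketch of that check, assuming a hypothetical official URL you type in directly; it is not tooling from the researchers, and some real contact pages may block scripted requests or require a browser.

```python
import re
import urllib.request

def digits_only(number: str) -> str:
    """Strip formatting so '+1 (888) 555-0100' and '1-888-555-0100' compare equal."""
    return re.sub(r"\D", "", number)

def number_listed_on_page(number: str, official_url: str) -> bool:
    """Check whether the digits of `number` appear as a phone-like run on `official_url`.

    `official_url` should be typed or bookmarked by you, never taken from the
    same AI answer you are trying to verify.
    """
    req = urllib.request.Request(official_url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="ignore")

    target = digits_only(number)
    # Pull out runs of characters that look like phone numbers before comparing.
    candidates = re.findall(r"[+\d][\d\s().\-]{6,}\d", page)
    return any(target == digits_only(run) or target in digits_only(run)
               for run in candidates)

# Hypothetical usage -- both values are placeholders, not real contact details:
# number_listed_on_page("+1 (888) 555-0100", "https://www.example-airline.com/contact")
```

A "not found" result is the useful signal here: it means the number the assistant gave you does not appear where the company itself publishes its contact details.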
[2]
How Scammers Poison AI Results With Fake Customer Support Numbers
Scammers love to seed the internet with fake customer service numbers in order to lure in unsuspecting victims who are just trying to fix something wrong in their life. Con artists have done it to Google Search for years, so it makes sense that they've moved on to the latest space where people are frequently searching for information: AI chatbots. AI cybersecurity company Aurascape has a new report on how scammers are able to inject their own phone numbers into LLM-powered systems, resulting in scam numbers appearing as authoritative-sounding answers to requests for contact information in AI applications like Perplexity or Google AI Overviews. And when someone calls that number, they're not talking with customer support from, say, Apple. They're talking with the scammers.

According to Aurascape, the scammers are able to do this through a wide variety of tactics. One way is by planting spam content on trusted websites, like government, university, and high-profile sites that use WordPress. This method requires gaining access in ways that may be more difficult but aren't impossible. The easier version is planting the spam content on user-generated platforms like YouTube and Yelp or other sites that allow reviews. The scammers inject their phone numbers but include all of the likely search terms that would let the number reach its intended target, such as "Delta Airlines customer support number" and countless variations.

All of that is normal for scammers trying to juice Google Search results. But Aurascape notes it's the structure of the data that can set it apart for LLMs. By posting the likely search terms in the summarization formats that AI loves to deliver, the content has a higher chance of success as these AI chatbots scour the internet for an answer. The new report refers to Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) as distinct from SEO: coaxing the AI to retrieve the content and treat it as authoritative because of the way it's presented. To be clear, the scam outputs in the Aurascape report are simply being retrieved and reproduced in the context of an individual AI output, not corrupting the LLM itself. "For traditional SEO, the goal is to appear high in a list of search results," the company explains. "For GEO/AEO, the goal is more direct: be the single piece of content that the AI assistant chooses, summarizes, and presents as 'the answer.'" As detailed in the report, the scammers apply GEO/AEO techniques in HTML pages and PDFs uploaded to high-trust sites.

An Emirates reservations number is one of the many examples Aurascape uses in the report to show how Perplexity delivered a scam phone number during tests. Google's AI Overviews feature also delivered fake numbers. Gizmodo did some quick testing and wasn't able to duplicate a fake customer support number. But it seems pretty clear that AI companies are starting to take notice, especially given the specific tests conducted for this Aurascape report.
When Gizmodo asked Perplexity for the customer support number for Emirates Airlines, the bot responded: The safest way to reach Emirates for reservations is to use the official contact channels listed on the Emirates website rather than third-party numbers that appear in search results or PDFs, which are often spam or agency lines misrepresented as "official." Gizmodo pressed for the bot to just give us the number, but it responded: There are multiple conflicting "Emirates reservations" numbers online, and many of them are actually third-party agencies rather than Emirates itself, so none of the +1-(8xx) numbers shown on generic guides can be trusted as an official line. The bot told us to visit emirates.com to find the number. And we guess that is one way to fight back against your AI chatbot spreading misinformation and spam. Just stop it from spreading specific types of information altogether.

Back in 2022, we wrote about the different scam websites that were successfully getting victims to download what they thought were Canon printer drivers. While the new report from Aurascape didn't address downloadable drivers as a potential attack vector, we can imagine that would be something scammers are already trying. After all, AI chatbots should only be trusted when they show their work. But the flip side of that is the necessity of the chatbot providing hyperlinks where information can be double-checked. Or, in this hypothetical, where software could be downloaded. Just make sure you scrutinize that URL carefully. There's a big difference between usa.canon.com and canon.com-ijsetup.com. The latter is a phishing website.

"Our investigation shows that threat actors are already exploiting this frontier at scale -- seeding poisoned content across compromised government and university sites, abusing user-generated platforms like YouTube and Yelp, and crafting GEO/AEO-optimized spam designed specifically to influence how large language models retrieve, rank, and summarize information," Aurascape wrote. "The result is a new class of fraud in which AI systems themselves become unintentional amplifiers of scam phone numbers. Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources. This tells us the problem is not isolated to a single model or single vendor -- it is becoming systemic."
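The usa.canon.com versus canon.com-ijsetup.com example comes down to which registered domain actually owns the host. Here is a rough sketch of that check, using only the Python standard library and treating canon.com purely as an illustrative official domain, not a vetted allowlist:

```python
from urllib.parse import urlparse

def belongs_to(url: str, official_domain: str) -> bool:
    """True if the URL's host is the official domain or one of its subdomains.

    A plain substring check is not enough: 'canon.com-ijsetup.com' contains
    'canon.com' but is registered under 'com-ijsetup.com'.
    """
    host = (urlparse(url).hostname or "").lower()
    official_domain = official_domain.lower()
    return host == official_domain or host.endswith("." + official_domain)

print(belongs_to("https://usa.canon.com/support/drivers", "canon.com"))   # True
print(belongs_to("https://canon.com-ijsetup.com/download", "canon.com"))  # False
```

The design choice is the exact-host-or-subdomain test: a naive "does the URL contain canon.com" check would wave the phishing domain through, while matching on the end of the hostname does not.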
Cybercriminals are manipulating AI search results by poisoning public web content with fraudulent phone numbers. Research from Aurascape reveals how threat actors use Generative Engine Optimization to inject scam call center numbers into AI chatbots like Google's AI Overview and Perplexity, steering unsuspecting users toward fake customer support lines for major airlines and other services.
Cybercriminals have discovered a new attack surface by exploiting AI chatbots through a technique called LLM phone number poisoning. According to research published by Aurascape's Aura Labs on December 8, threat actors are systematically manipulating public web content to ensure AI-powered systems recommend scam call center numbers as legitimate contact information [1]. This AI poisoning campaign has already affected major platforms including Google's AI Overview and Perplexity's Comet browser, with fraudulent airline customer support numbers appearing in search results [1].
The technique represents a significant shift in how customer support scams operate. Rather than directly attacking large language models, scammers are poisoning AI search results by contaminating the content that AI systems scrape and index to answer user queries [1]. When users search for contact information through AI assistants, they're unknowingly directed to fraudulent call centers designed to extract money and sensitive data [1].

The scammers are leveraging Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) to manipulate AI search results. Unlike traditional Search Engine Optimization, which focuses on ranking high in search results, GEO and AEO aim to become the single source that AI assistants choose and present as the authoritative answer [2]. This represents a fundamental shift in how threat actors approach injecting fake customer support numbers into information systems.

According to Aurascape, scammers deploy multiple tactics to poison content. They plant spam content on trusted websites including compromised government sites, university domains, and high-profile WordPress installations [1]. The easier approach involves exploiting user-generated platforms like YouTube descriptions and Yelp reviews, where they inject fraudulent phone numbers alongside search terms such as "Delta Airlines customer support number" and countless variations [2].

What makes this particularly effective for exploiting AI chatbots is the structure of the data. By formatting content in the summarization formats that AI systems prefer, scammers increase their chances of having their poisoned content retrieved and presented as trustworthy [2]. The AI cybersecurity firm emphasizes that these outputs are being retrieved and reproduced in individual AI responses, creating what appears to be legitimate information.
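To make the "single chosen answer" dynamic concrete, here is a deliberately simplified toy scorer. It is an assumption for illustration only, not how any real answer engine ranks content: it rewards snippets that echo the query terms and contain an explicit answer-shaped sentence, which is exactly the shape GEO/AEO spam is built to have.

```python
import re

def answer_score(query: str, snippet: str) -> float:
    """Toy heuristic: how 'answer-shaped' a snippet looks for a given query.

    Real answer engines are far more sophisticated; this only illustrates why
    content formatted as a direct Q&A tends to win the selection step.
    """
    q_terms = set(re.findall(r"\w+", query.lower()))
    s_terms = re.findall(r"\w+", snippet.lower())
    overlap = sum(1 for t in s_terms if t in q_terms) / max(len(s_terms), 1)
    has_direct_answer = bool(
        re.search(r"\b(official|call|phone|number)\b.*\d{3}", snippet.lower())
    )
    return overlap + (0.5 if has_direct_answer else 0.0)

query = "Emirates Airlines reservations phone number"
snippets = [
    "Emirates operates flights to over 150 destinations and was founded in 1985.",
    # Answer-shaped spam (the number is a fictional placeholder):
    "The official Emirates Airlines reservations phone number is +1 (888) 555-0100. Call now.",
]
print(max(snippets, key=lambda s: answer_score(query, s)))  # picks the spam snippet
```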
Aurascape documented multiple instances of this technique actively being used in the wild. When researchers queried Perplexity with "the official Emirates Airlines reservations number," the AI returned a fully fabricated answer containing a fraudulent call center number [1]. Similar scam call center numbers appeared when requesting British Airways reservation lines [1].

Google's AI Overview also delivered dangerous contact information. When asked for the Emirates phone number, the system included multiple fraudulent call-center numbers presented as legitimate customer service lines [1]. The problem stems from AI systems pulling both legitimate and fraudulent content through their retrieval layers, making scam detection difficult and causing content to appear trustworthy [1].
When Gizmodo conducted follow-up testing, Perplexity appeared to have implemented safeguards, refusing to provide specific Emirates numbers and warning that "many of them are actually third-party agencies" [2]. This suggests AI companies are taking notice and responding by limiting certain types of information delivery altogether.
The researchers warn this represents a "broad, cross-platform contamination effect" that extends beyond individual AI platforms [1]. Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources, indicating the problem is becoming systemic rather than isolated to a single vendor [1].
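One defensive signal follows from this: if the snippets or citations behind an answer disagree with each other about the phone number, the retrieval layer has probably touched polluted sources. Below is a small illustrative check, assuming you can see the retrieved snippets; the numbers in it are fictional placeholders.

```python
import re
from collections import Counter

def numbers_in(text: str) -> set[str]:
    """Extract phone-like digit runs from a snippet, normalized to digits only."""
    return {re.sub(r"\D", "", m) for m in re.findall(r"[+\d][\d\s().\-]{6,}\d", text)}

def conflicting_numbers(snippets: list[str]) -> bool:
    """True if the retrieved snippets do not agree on a single phone number."""
    counts = Counter(n for s in snippets for n in numbers_in(s))
    return len(counts) > 1

# Placeholder snippets a retrieval layer might surface for one query:
retrieved = [
    "Emirates reservations: +1 (800) 555-0123",         # plausible-looking listing
    "Call Emirates support now at +1 (888) 555-0100",   # answer-shaped spam
]
print(conflicting_numbers(retrieved))  # True -> treat the answer with suspicion
```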
The poisoning technique resembles indirect prompt injection attacks, where website functionality is compromised to force an LLM to perform harmful actions [1]. The phishing and misinformation risks extend beyond airline numbers: any customer support line could become a target as scammers refine their methods for poisoning AI summaries.

To stay protected, users must verify any contact information provided by AI assistants, especially before sharing sensitive data. The convenience of AI-powered search doesn't guarantee safety, regardless of the provider [1]. As AI adoption accelerates, watch for how platforms balance providing helpful information against the growing threat of systematically poisoned content infiltrating their systems.

Summarized by Navi