Scammers exploit AI chatbots with fake customer support numbers through content poisoning

Reviewed byNidhi Govil

2 Sources

Share

Cybercriminals are manipulating AI search results by poisoning public web content with fraudulent phone numbers. Research from Aurascape reveals how threat actors use Generative Engine Optimization to inject scam call center numbers into AI chatbots like Google's AI Overview and Perplexity, steering unsuspecting users toward fake customer support lines for major airlines and other services.

Cybercriminals Target AI Chatbots With LLM Phone Number Poisoning

Cybercriminals have discovered a new attack surface by exploiting AI chatbots through a technique called LLM phone number poisoning. According to research published by Aurascape's Aura Labs on December 8, threat actors are systematically manipulating public web content to ensure AI-powered systems recommend scam call center numbers as legitimate contact information

1

. This AI poisoning campaign has already affected major platforms including Google's AI Overview and Perplexity's Comet browser, with fraudulent airline customer support numbers appearing in search results

1

.

Source: ZDNet

Source: ZDNet

The technique represents a significant shift in how customer support scams operate. Rather than directly attacking large language models, scammers are poisoning AI search results by contaminating the content that AI systems scrape and index to answer user queries

1

. When users search for contact information through AI assistants, they're unknowingly directed to fraudulent call centers designed to extract money and sensitive data

1

.

How Generative Engine Optimization Enables Content Manipulation

The scammers are leveraging Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) to manipulate AI search results. Unlike traditional Search Engine Optimization, which focuses on ranking high in search results, GEO and AEO aim to become the single source that AI assistants choose and present as the authoritative answer

2

. This represents a fundamental shift in how threat actors approach injecting fake customer support numbers into information systems.

According to Aurascape, scammers deploy multiple tactics to poison content. They plant spam content on trusted websites including compromised government sites, university domains, and high-profile WordPress installations

1

. The easier approach involves exploiting user-generated platforms like YouTube descriptions and Yelp reviews, where they inject fraudulent phone numbers alongside search terms such as "Delta Airlines customer support number" and countless variations

2

.

What makes this particularly effective for exploiting AI chatbots is the structure of the data. By formatting content in the summarization formats that AI systems prefer, scammers increase their chances of having their poisoned content retrieved and presented as trustworthy

2

. The AI cybersecurity firm emphasizes that these outputs are being retrieved and reproduced in individual AI responses, creating what appears to be legitimate information.

Real-World Examples Show Widespread Contamination

Aurascape documented multiple instances of this technique actively being used in the wild. When researchers queried Perplexity with "the official Emirates Airlines reservations number," the AI returned a fully fabricated answer containing a fraudulent call center number

1

. Similar scam call center numbers appeared when requesting British Airways reservation lines

1

.

Google's AI Overview also delivered dangerous contact information. When asked for the Emirates phone number, the system included multiple fraudulent call-center numbers presented as legitimate customer service lines

1

. The problem stems from AI systems pulling both legitimate and fraudulent content through their retrieval layers, making scam detection difficult and causing content to appear trustworthy

1

.

Source: Gizmodo

Source: Gizmodo

When Gizmodo conducted follow-up testing, Perplexity appeared to have implemented safeguards, refusing to provide specific Emirates numbers and warning that "many of them are actually third-party agencies"

2

. This suggests AI companies are taking notice and responding by limiting certain types of information delivery altogether.

Cross-Platform Contamination Creates Systemic Risk

The researchers warn this represents a "broad, cross-platform contamination effect" that extends beyond individual AI platforms

1

. Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources, indicating the problem is becoming systemic rather than isolated to a single vendor

1

.

This technique resembles indirect prompt injection attacks, where website functionality is compromised to force an LLM to perform harmful actions

1

. The phishing and misinformation risks extend beyond airline numbers—any customer support line could become a target as scammers refine their methods for poisoning AI summaries.

To stay protected, users must verify any contact information provided by AI assistants, especially before sharing sensitive data. The convenience of AI-powered search doesn't guarantee safety, regardless of the provider

1

. As AI adoption accelerates, watch for how platforms balance providing helpful information against the growing threat of systematically poisoned content infiltrating their systems.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo