AI-Generated Fake Reviews: A Growing Threat to Online Consumer Trust


The rise of AI tools is exacerbating the problem of fake online reviews, posing new challenges for businesses and consumers alike. Tech companies and watchdogs are working to detect and mitigate this emerging threat.


The Rise of AI-Generated Fake Reviews

The internet has long been plagued by fake reviews, but the emergence of generative artificial intelligence (AI) tools has intensified the problem. These AI-powered text generation tools, popularized by platforms like OpenAI's ChatGPT, are enabling fraudsters to produce fake reviews faster and in greater volumes than ever before.[1][2][3]

Scope and Impact of the Problem

Fake reviews are pervasive across industries including e-commerce, hospitality, and professional services. The Transparency Company, a watchdog group, analyzed 73 million reviews in the home, legal, and medical services sectors. Its findings revealed that nearly 14% of these reviews were likely fake, with 2.7 million partly or entirely generated by AI.[1][2][3]

The problem is not limited to written reviews. In August 2024, software company DoubleVerify reported a significant increase in mobile phone and smart TV apps carrying AI-crafted reviews, often used to deceive users into installing malicious apps.[1][2][3]

Legal and Regulatory Responses

The Federal Trade Commission (FTC) has taken action against this growing threat. In September 2024, the FTC sued the company behind Rytr, an AI writing tool and content generator, accusing it of facilitating the creation of fraudulent reviews. The agency has also adopted a rule banning the sale or purchase of fake reviews.[1][2][3][4]

Detection and Mitigation Efforts

Tech companies and researchers are developing methods to detect AI-generated reviews. Max Spero, CEO of AI detection company Pangram Labs, reported that its software has identified AI-generated reviews on major platforms such as Amazon and Yelp.[1][2][3]

However, detection remains challenging. Amazon has stated that external parties may fall short in identifying fake reviews because they lack access to the data signals that indicate patterns of abuse.[1][2][3]

The Complexity of AI-Generated Content

Not all AI-generated reviews are inherently fake or malicious. Some consumers, particularly non-native English speakers, may use AI tools to articulate their genuine experiences more effectively.[1][2][3][4]

Corporate Responses and Policies

Major companies are developing policies to address AI-generated content within their review systems. Amazon and Trustpilot, for example, allow AI-assisted reviews as long as they reflect genuine customer experiences. Yelp has taken a more cautious approach, requiring reviewers to write their own content.[1][2][3][4]

Industry Collaboration

The Coalition for Trusted Reviews, which includes major players like Amazon, Trustpilot, Glassdoor, Tripadvisor, Expedia, and Booking.com, acknowledges the dual nature of AI in this context. While AI can be used for deception, it also presents opportunities to combat misleading reviews.[1][2][3][4][5]

As the battle against fake reviews intensifies, the integration of AI tools into both the creation and detection of fraudulent content marks a new chapter in the ongoing struggle to maintain trust in online consumer feedback systems.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited