AI-driven bot attacks surge 12.5x as bad bots claim 40% of all internet traffic

Thales reports that bad bots now represent nearly 40% of global internet traffic, with AI-driven bot attacks surging 12.5 times in 2025. The rise of AI agents is blurring the line between legitimate automation and malicious intent, as 27% of attacks now target APIs. Financial services bore the brunt, accounting for 46% of all account takeovers last year.

AI-Driven Bot Attacks Redefine the Threat Landscape

The internet has entered a new phase where machines dominate human activity, and the threat posed by bad bots has reached alarming proportions. According to the 2026 Bad Bot Report released by Thales, automated bots now account for more than 53% of all internet traffic, while bad bots alone represent nearly 40% of global web activity [1][2]. More concerning is the explosive growth in AI-driven bot attacks, which surged 12.5 times in 2025 compared to the previous year [2]. This surge signals a fundamental shift in how cyberattacks are conducted, with AI-accelerated automation transforming bots from simple scripts into sophisticated entities capable of complex operations.

Source: CXOToday

The findings from Thales' Threat Research and Security Analyst Services teams reveal that bots surpassing human interaction is no longer a temporary phenomenon but a structural change in how the internet operates [1]. Human activity has fallen to just 47% of web traffic, down from 49% the previous year, while automated activity continues its relentless climb [2]. This shift matters because it changes the baseline assumption of digital security: organizations can no longer design systems primarily for human users when machines are the dominant force online.

Source: TechRadar

AI Agents Mimic Human Behavior and Blur Security Lines

The evolution of malicious bot activity has moved far beyond traditional credential stuffing and price scraping operations. AI agents now represent a third category of internet traffic, sitting alongside traditional good and bad bots, and they interact directly with applications and backend systems to perform complex tasks [2]. These AI agents mimic human behavior with alarming precision, making it increasingly difficult to distinguish between legitimate and malicious activity [1].

Tim Chang, Global Vice President and General Manager of Application Security at Thales, emphasized this challenge: "AI is transforming automation from something organizations try to block into something they must also manage. The challenge is no longer identifying bots. It's understanding what the bot, agent, or automation is doing, whether it aligns with business intent, and how it interacts with critical systems" [1][2].

This visibility gap creates significant risk for organizations operating with an incomplete view of threats. Much of today's AI-driven activity remains unverified or indistinguishable from legitimate traffic, meaning security teams are essentially operating blind when it comes to understanding the true nature of automated interactions with their systems [2].

Attacks Target APIs and Financial Services Bear the Brunt

A significant portion of malicious bot activity, approximately 27%, now specifically targets APIs, where bots can bypass user interfaces and interact directly with backend systems at machine speed [1][2]. These attacks often appear legitimate, using valid authentication and well-formed requests, but they exploit business logic, extract sensitive data, or manipulate workflows at scale [2].
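One defensive signal against the machine-speed behavior described above is request cadence: even individually valid, authenticated calls can be flagged when they arrive faster than any human could plausibly produce them. The sketch below illustrates a sliding-window velocity check; all names and thresholds are hypothetical illustrations, not taken from the Thales report.

```python
from collections import defaultdict, deque

class VelocityCheck:
    """Flag clients whose request cadence within a sliding time
    window is implausible for a human, even when every request
    is well-formed and carries valid credentials."""

    def __init__(self, max_requests=20, window_seconds=1.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> recent timestamps

    def allow(self, client_id, now):
        q = self.history[client_id]
        # Evict timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # machine-speed burst: throttle or escalate
        q.append(now)
        return True


checker = VelocityCheck(max_requests=5, window_seconds=1.0)
# A scripted client firing 10 calls in ~10 ms trips the limit;
# a human-paced client spread over seconds would not.
results = [checker.allow("bot-123", t / 1000) for t in range(10)]
```

In practice such a check would be one signal among many (device fingerprinting, behavioral analysis), since sophisticated bots deliberately pace their requests to stay under simple rate thresholds.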

The financial services sector has become a prime target for these sophisticated attacks. The industry accounted for 24% of all bot attacks and suffered 46% of all account takeovers in 2025 [1][2]. This concentration underscores how automation is being weaponized to directly monetize cyberattacks, with attackers focusing their efforts where the financial rewards are greatest. Bots target identity systems as a gateway to these high-value accounts, exploiting the very mechanisms designed to authenticate legitimate users.

Traditional Security Approaches Are Inadequate for the Agentic Age

The Bad Bot Report makes clear that traditional security approaches focused solely on identifying and blocking bots are no longer sufficient in an environment where automation is both pervasive and often legitimate [2]. Organizations must shift toward automation governance models that combine visibility, policy enforcement, and behavioral analysis to manage this new reality [2].

This means defining which AI agents are allowed to interact with systems, implementing controls at the API security and identity layer, and designing defenses that can adapt as bots evolve [2]. The short-term implication is that security teams need to urgently reassess their detection capabilities and invest in tools that can analyze intent rather than just identify automation. Long-term, organizations will need to build entirely new frameworks for managing machine-to-machine interactions as AI agents become standard participants in digital ecosystems.
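As a concrete illustration of such a governance model, a gateway might check each automated caller against a declared policy: which agent identities are approved, and which endpoints each may touch, denying everything else by default. The following minimal sketch uses hypothetical names (`AgentPolicy`, `inventory-agent`, the endpoint paths); nothing here comes from the Thales report.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Deny-by-default allowlist: each approved agent identity maps
    to the set of API endpoints it is permitted to call."""
    allowed: dict = field(default_factory=dict)  # agent_id -> set of endpoints

    def register(self, agent_id, endpoints):
        self.allowed[agent_id] = set(endpoints)

    def authorize(self, agent_id, endpoint):
        # Unregistered automation, or a registered agent stepping
        # outside its declared scope, is blocked outright.
        return endpoint in self.allowed.get(agent_id, set())


policy = AgentPolicy()
policy.register("inventory-agent", {"/api/stock", "/api/prices"})

decisions = [
    policy.authorize("inventory-agent", "/api/stock"),     # in scope
    policy.authorize("inventory-agent", "/api/payments"),  # out of scope
    policy.authorize("unknown-bot", "/api/stock"),         # unregistered
]
```

The deny-by-default choice reflects the report's framing: automation is no longer assumed hostile or benign by category, so each agent's access must be declared and enforced explicitly rather than inferred.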

What remains uncertain is how quickly attackers will continue to refine their techniques and whether defensive technologies can keep pace. As AI capabilities advance, the sophistication with which AI agents mimic human behavior will only increase, potentially reaching a point where distinguishing automated from human activity becomes nearly impossible without fundamentally rethinking application security architecture. Organizations should watch for further increases in API-targeted attacks and monitor how regulatory frameworks evolve to address this machine-driven internet landscape.
