OpenClaw AI agents expose over 28,000 systems to hackers through critical security flaws

SecurityScorecard reveals that 28,663 internet-exposed OpenClaw AI agent deployments face critical security vulnerabilities, with 63% vulnerable to remote code execution. The research highlights how AI agents deployed with excessive permissions become prime targets for attackers seeking unauthorized system access.

OpenClaw Deployments Leave Thousands of Systems Vulnerable

A SecurityScorecard investigation has uncovered a troubling reality about AI agents in production environments. The research identified 40,214 internet-exposed OpenClaw instances, with 28,663 unique IP addresses hosting control panels accessible from anywhere online [1]. These agentic AI systems, designed to automate tasks like scheduling meetings and managing emails, are moving rapidly from experimental tools into everyday workflows. Yet security practices have failed to keep pace with deployment speed.

Source: TechRadar

OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent capable of handling routine tasks on behalf of users [1]. The core issue isn't the technology's capabilities but rather the excessive permissions granted without proper safeguards. Approximately 63% of observed deployments appear vulnerable to remote code execution, a severe weakness that allows attackers to take over host machines without any user interaction [1].

Security Vulnerabilities Create Multiple Attack Vectors

The research uncovered three high-severity Common Vulnerabilities and Exposures (CVEs) affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8 [1]. What makes these vulnerabilities particularly dangerous is that public exploit code is already available for all three, meaning attackers don't need advanced technical skills to compromise exposed systems. Among the exposed instances, 549 correlate with prior breach activity, while 1,493 are associated with known vulnerabilities that compound the risk [1].

Jeremy Turner, VP of Threat Intelligence at SecurityScorecard, explained the fundamental problem: "In practice, because it was written by AI, security wasn't a dominating feature in the development process" [1]. The exposed deployments concentrate heavily in major cloud providers, indicating repeatable, easily replicated insecure patterns that could spread further.

Autonomous AI Agents Enable Sophisticated System Compromise

The threat posed by compromised AI agents extends beyond traditional malware concerns. When attackers gain control of these autonomous AI agents, they effectively acquire a semi-autonomous operator inside the system with full machine access [2]. This AI-powered malware can make real-time decisions, automate system monitoring, move laterally across network layers, and exfiltrate data efficiently [2].

Source: TweakTown

"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," researchers stated [1]. A compromised agent could transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate. The risk of data exposure escalates when users configure these tools with personal and company names, revealing exactly who is using them and creating attractive targets for attackers [1].

Cybersecurity Threats Demand Immediate Action

The findings reveal a fundamental disconnect between the pace of AI adoption and the state of security practices. In some cases, OpenClaw takes actions beyond what users explicitly instruct, prompting Microsoft to advise against running it on standard personal or enterprise devices [1]. Chinese authorities have restricted its use in office environments, citing its tendency toward data exposure and broader cybersecurity threats [1]. Some vulnerabilities allow unauthorized access to sensitive data, and the malware has been distributed through GitHub repositories [1].

Turner emphasized the efficiency advantage for attackers: AI lowers the barrier to entry while increasing efficiency, allowing a single operator to control thousands of endpoints simultaneously [2]. His advice for users is direct: "Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do" [1].
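Turner's "build in some separation" advice can be approximated in code as a default-deny gate between an agent and the actions it may take. The sketch below is purely illustrative; the action names and the gate function are hypothetical, not OpenClaw's API:

```python
# Sketch of a default-deny permission gate for an AI agent's actions.
# All names here are illustrative, not OpenClaw's real interface.

ALLOWED = {"read_calendar", "draft_email"}            # safe, pre-approved
NEEDS_CONFIRMATION = {"send_email", "delete_file",    # irreversible or
                      "transfer_funds"}               # externally visible

def gate(action: str, confirm=None) -> bool:
    """Allow an action only if pre-approved or explicitly confirmed."""
    if action in ALLOWED:
        return True
    if action in NEEDS_CONFIRMATION and confirm is not None:
        return bool(confirm(action))   # human-in-the-loop prompt
    return False                       # default-deny: unknown actions blocked
```

Running the agent under a separate OS account or inside a container, with only the directories it genuinely needs mounted, applies the same separation one layer lower in the stack.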

As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies [2]. Organizations deploying agentic AI systems must carefully consider which integrations they support and what permissions they actually grant, lest helpful tools become Trojan horses for attackers.
