OpenClaw AI agents give hackers control of 28,000 systems through security vulnerabilities

Reviewed by Nidhi Govil


SecurityScorecard research reveals that over 28,000 OpenClaw AI agent deployments are exposed to the internet with minimal safeguards. Approximately 63% are vulnerable to remote code execution, with three high-severity vulnerabilities already equipped with public exploit code. The findings highlight a critical gap between rapid AI adoption and security practices.

OpenClaw Security Vulnerabilities Expose Thousands of Systems

A SecurityScorecard investigation has uncovered a troubling reality about AI agents in production environments. The research identified 40,214 internet-exposed instances of OpenClaw, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the internet [1]. These internet-exposed instances represent a significant attack surface: approximately 63% of observed deployments are vulnerable to remote code execution, a critical flaw that allows attackers to seize control of host machines without any user interaction [2].
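Whether a control panel like this ends up internet-exposed usually comes down to the address its listener is bound to. A minimal sketch of the distinction (this is generic socket code, not OpenClaw's actual configuration; the port choice is arbitrary):

```python
import socket

def open_panel_listener(host: str, port: int = 0) -> socket.socket:
    """Bind a TCP listener; the host argument decides who can reach it.

    "127.0.0.1" -> loopback only: other machines cannot connect.
    "0.0.0.0"   -> all interfaces: internet-exposed if the host is public.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))  # port 0 asks the OS for any free port
    s.listen()
    return s

# Loopback-only listener: the safer default for a local control panel.
listener = open_panel_listener("127.0.0.1")
print(listener.getsockname()[0])  # 127.0.0.1
listener.close()
```

The insecure pattern the researchers describe is the one-character variant: binding to "0.0.0.0" on a cloud host with no authentication in front of it.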

Source: TechRadar


The severity of the situation becomes clearer when examining the specific vulnerabilities involved. Three high-severity Common Vulnerabilities and Exposures (CVEs) affect OpenClaw, with CVSS scores ranging from 7.8 to 8.8 [1]. Public exploit code is already available for all three, meaning attackers don't need advanced skills to compromise exposed systems. The research also found that 549 exposed instances correlate with prior breach activity, while 1,493 are associated with known vulnerabilities that compound the risk [2].

Source: TweakTown


Excessive Permissions Create Trojan-Like Capabilities

OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent capable of scheduling meetings, sending emails, and managing tasks on behalf of users [1]. The problem isn't the AI's capabilities but the excessive permissions granted to these autonomous agents without proper security controls. "The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated [3].

Jeremy Turner, VP of Threat Intelligence at SecurityScorecard, explained that "in practice, because it was written by AI, security wasn't a dominating feature in the development process" [1]. This lack of robust security practices has created a situation where AI agents effectively function as semi-autonomous operators inside systems, capable of making real-time decisions and controlling entire machines [3]. A compromised agent could be instructed to transfer funds, delete files, or send malicious messages without raising immediate alarms, because the behavior appears legitimate.

Data Exposure and System Compromise at Scale

The exposed deployments are heavily concentrated in major cloud and hosting providers, indicating repeatable, easily replicated insecure deployment patterns [1]. Many users configure these agents with personal and company names, revealing exactly who is using the tools and making them attractive targets for attackers [2]. When users connect an AI agent to a platform, they give it an identity with specific permissions that may include posting content, accessing email, reading files, or interacting with other systems.

Turner emphasized the scale of the risk: "The risk isn't that these systems are thinking for themselves. It's that we're giving them access to everything" [2]. He compared this to handing your laptop to a stranger on the street and hoping nothing bad happens. AI-powered malware capabilities let attackers automate system monitoring, move laterally across system layers, and extract data efficiently [3].

Industry Response and Future Implications for Cybersecurity

The severity of OpenClaw's vulnerabilities has prompted responses from major technology companies and governments. Microsoft has advised that OpenClaw should not be run on standard personal or enterprise devices, while Chinese authorities have restricted its use in office environments due to its tendency toward data exposure and broader security risks [1]. Some vulnerabilities allow attackers to access sensitive data, and OpenClaw has been used to distribute malware through GitHub repositories.

Looking ahead, the OpenClaw situation underscores a fundamental disconnect between AI adoption and security practice. Turner advised users to exercise caution: "Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do" [1]. The rise of AI-powered malware marks a significant evolution in cyber threats, where automation meets adaptability. As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies to address this growing threat landscape [3].
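Turner's "build in some separation" can start with something as simple as not letting a new tool inherit your session's secrets. A sketch of one such precaution (the environment contents and demo variable are illustrative, not tied to any particular agent):

```python
import os
import subprocess
import sys

def run_isolated(cmd: list[str]) -> subprocess.CompletedProcess:
    """Run cmd with a minimal, explicit environment.

    The child process sees only what is listed here -- none of the
    caller's API keys, tokens, or other session variables.
    """
    return subprocess.run(
        cmd,
        env={"PATH": "/usr/bin:/bin"},  # deliberately tiny environment
        capture_output=True,
        text=True,
    )

# Demo: a secret set in this process never reaches the child.
os.environ["FAKE_API_KEY"] = "hunter2"
result = run_isolated(
    [sys.executable, "-c", "import os; print(os.environ.get('FAKE_API_KEY'))"]
)
print(result.stdout.strip())  # None
```

Containers, separate user accounts, or a dedicated machine extend the same idea further; the point is that the separation is deliberate and explicit rather than inherited by default.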
