2 Sources
[1]
AI agents like OpenClaw could do more harm than good
* OpenClaw exposures reveal thousands of internet-accessible, high-risk systems
* AI agents are being deployed with excessive permissions across critical environments
* Remote code execution vulnerabilities expose most observed OpenClaw deployments

Agentic systems are moving quickly from experimentation into everyday workflows, yet recent findings suggest security practices are not keeping pace. According to SecurityScorecard, thousands of OpenClaw deployments are exposed directly to the internet with minimal safeguards. The team identified 40,214 internet-exposed OpenClaw instances in total, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the internet.

Exposed AI agents become a hacker's dream target

"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated. Approximately 63% of observed deployments appear vulnerable to remote code execution, allowing attackers to take over the host machine without user interaction.

The research identified three high-severity Common Vulnerabilities and Exposures affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8. Public exploit code is already available for all three vulnerabilities, meaning attackers do not need advanced skills to compromise exposed systems. The research also found that 549 exposed instances correlate with prior breach activity, and 1,493 are associated with known vulnerabilities that compound the risk for users. The exposed deployments are heavily concentrated in major cloud and hosting providers, indicating repeatable and easily replicated insecure deployment patterns.

OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent that can schedule meetings, send emails, and manage tasks on behalf of users. The problem is not the AI's capabilities but the access and permissions granted to these systems without proper security controls.

"In practice, because it was written by AI, security wasn't a dominating feature in the development process," said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard. "For the folks that want to use the more agentic AI systems, you really need to take careful consideration in what integrations you support and what permissions you actually give."

Many users are configuring these bots with personal names and company names, revealing exactly who is using these AI tools and making them attractive targets for attackers. Any time a user connects an AI agent to a platform, they are giving it an identity with specific permissions. That identity may be able to post content, access email, read files, or interact with other systems on the user's behalf.

"The risk isn't that these systems are thinking for themselves," Turner said. "It's that we're giving them access to everything."

"It's like handing your laptop to a stranger on the street and hoping nothing bad happens... Any of the communications... on that device... are going to be interfaces from untrusted third parties that can... take certain actions."

A compromised agent could be instructed to transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate. Unfortunately, the report reveals a fundamental disconnect between AI adoption and security practices.

Users are being asked to give these agents broad system access, and in many cases, that has already led to data exposure, unintended actions, and loss of control. In some cases, OpenClaw takes actions beyond what users explicitly instruct, and Microsoft has since advised that it should not be run on standard personal or enterprise devices. Chinese authorities have restricted its use in office environments due to its tendency toward data exposure and broader security risks. Some OpenClaw vulnerabilities allow hackers to access sensitive data, and it has been used to distribute malware through GitHub repositories.

"Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," Turner said.
[2]
OpenClaw trojan uses AI agents to take control of 28,000 systems
TL;DR: OpenClaw is a new AI-driven Trojan that has compromised over 28,000 systems by using autonomous AI agents to control, monitor, and extract data from infected machines. This malware represents a major evolution in cyber threats, enabling attackers to manage thousands of endpoints efficiently and adaptively.

A new Trojan dubbed "OpenClaw" is raising serious alarms, with researchers warning that AI agents are now being weaponized to take full control of thousands of systems. Security analysts report that OpenClaw has already compromised more than 28,000 machines, leveraging AI-driven automation to execute commands, adapt to environments, and maintain persistence on these systems in ways that traditional malware struggles to achieve.

The key concern here isn't the scale or the number of affected machines, but the infection's capability. OpenClaw effectively hands attackers a semi-autonomous operator inside a system, and that operator has access to the entire machine. According to a TechRadar report, the malware uses these AI agents to dynamically interact with compromised environments. Because the AI agents have access to the entire machine, the malware can make real-time decisions and control the system. This means attackers can automate system monitoring, perform lateral movement across the layers of the system they have access to, and even conduct data extraction.

"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated.

The report outlines that AI lowers the barrier to entry while increasing efficiency, allowing a single operator to control thousands of endpoints simultaneously. These capabilities mark a significant evolution in cyber threats, where automation meets adaptability.

"Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard.

Looking ahead, OpenClaw underscores a growing shift in cybersecurity. As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies. The rise of AI-powered malware isn't theoretical anymore. It's already here, and it appears to be scaling in step with the sophistication of AI.
SecurityScorecard has identified 40,214 internet-exposed OpenClaw AI agent deployments, 28,663 of them with publicly accessible control panels and roughly 63% vulnerable to remote code execution. The research highlights how AI agents deployed with excessive permissions become prime targets for attackers seeking unauthorized system access.
A SecurityScorecard investigation has uncovered a troubling reality about AI agents in production environments. The research identified 40,214 internet-exposed OpenClaw instances, with 28,663 unique IP addresses hosting control panels accessible from anywhere online [1]. These agentic AI systems, designed to automate tasks like scheduling meetings and managing emails, are moving rapidly from experimental tools into everyday workflows. Yet security practices have failed to keep pace with deployment speed.
Source: TechRadar
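The exposure finding boils down to a reachability question: is the agent's control panel answering only on the loopback address, or on an address other machines can reach? The sketch below is a minimal self-check a deployer could run, assuming a hypothetical panel port (neither report names specific ports); reachability on a routable address is only a first warning sign, since NAT and upstream firewalls also determine what the wider internet can actually see.

```python
import socket

# Hypothetical control-panel port -- a stand-in, since the report
# does not name the ports involved.
PANEL_PORT = 8080


def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def routable_address() -> str:
    """Best-effort guess at the address other hosts would use to reach us."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))  # UDP connect sends no packets
        return s.getsockname()[0]


if __name__ == "__main__":
    if reachable(routable_address(), PANEL_PORT):
        print("Panel answers on a routable interface -- treat it as exposed.")
    elif reachable("127.0.0.1", PANEL_PORT):
        print("Panel is reachable only via loopback on this host.")
    else:
        print("Nothing is listening on the assumed port.")
```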
OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent capable of handling routine tasks on behalf of users [1]. The core issue isn't the technology's capabilities but rather the excessive permissions granted without proper safeguards. Approximately 63% of observed deployments appear vulnerable to remote code execution, a severe weakness that allows attackers to take over host machines without any user interaction [1].

The research uncovered three high-severity Common Vulnerabilities and Exposures affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8 [1]. What makes these security vulnerabilities particularly dangerous is that public exploit code is already available for all three, meaning attackers don't need advanced technical skills to compromise exposed systems. Among the exposed instances, 549 correlate with prior breach activity, while 1,493 are associated with known vulnerabilities that compound the risk [1].
Jeremy Turner, VP of Threat Intelligence at SecurityScorecard, explained the fundamental problem: "In practice, because it was written by AI, security wasn't a dominating feature in the development process" [1]. The exposed deployments concentrate heavily in major cloud and hosting providers, indicating repeatable and easily replicated insecure deployment patterns that could spread further.

The threat posed by compromised AI agents extends beyond traditional malware concerns. When attackers gain control of these autonomous AI agents, they effectively acquire a semi-autonomous operator inside the system with full machine access [2]. Once hijacked, the agent can make real-time decisions, automate system monitoring, perform lateral movement across network layers, and conduct data extraction efficiently [2].
Source: TweakTown
"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," researchers stated
1
. A compromised agent could transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate. The risk of data exposure escalates when users configure these tools with personal and company names, revealing exactly who is using them and creating attractive targets for attackers1
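Turner's framing of the problem as one of identities and permissions points to the obvious mitigation: nothing the operator has not explicitly granted should be callable at all. The sketch below is not OpenClaw's API (neither report describes one); it is a generic, hypothetical action gate illustrating what an explicit allowlist between an agent and its integrations could look like.

```python
from dataclasses import dataclass, field


@dataclass
class ActionGate:
    """Explicit allowlist sitting between an agent and its integrations.

    Anything not granted up front is refused, so a hijacked agent can only
    misuse the scopes the operator consciously handed over.
    """
    granted: set[str] = field(default_factory=set)

    def grant(self, scope: str) -> None:
        self.granted.add(scope)

    def execute(self, scope: str, action, *args, **kwargs):
        if scope not in self.granted:
            raise PermissionError(f"scope '{scope}' was never granted")
        return action(*args, **kwargs)


# --- hypothetical integrations, for illustration only --------------------
def read_calendar():
    return ["standup 09:00", "vendor call 14:00"]


def send_payment(amount: int):
    return f"sent {amount}"  # the kind of action you probably never grant


gate = ActionGate()
gate.grant("calendar:read")  # deliberate, narrow grant

print(gate.execute("calendar:read", read_calendar))   # allowed
try:
    gate.execute("payments:send", send_payment, 500)  # refused
except PermissionError as err:
    print("blocked:", err)
```

The specific mechanism matters less than the design choice: the deny-by-default decision is made by the operator, outside the agent's own reasoning loop, so a compromised agent cannot talk its way into scopes it was never given.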
The findings reveal a fundamental disconnect between AI adoption and security practices. In some cases, OpenClaw takes actions beyond what users explicitly instruct, prompting Microsoft to advise against running it on standard personal or enterprise devices [1]. Chinese authorities have restricted its use in office environments due to its tendency toward data exposure and broader cybersecurity threats [1]. Some vulnerabilities allow unauthorized access to sensitive data, and the tool has been used to distribute malware through GitHub repositories [1].
Turner emphasized the efficiency advantage for attackers: AI lowers the barrier to entry, allowing a single operator to control thousands of endpoints simultaneously [2]. His advice for users is direct: "Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do" [1].
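"Build in some separation" can be as literal as trialling the agent inside a disposable, network-isolated container before it ever touches a machine holding real accounts. A hedged sketch using standard docker flags follows; the image name is a placeholder, since neither report describes how OpenClaw is packaged.

```python
import subprocess

# Placeholder image name -- substitute whatever agent build you are evaluating.
IMAGE = "example/agent-under-test:latest"

cmd = [
    "docker", "run",
    "--rm",               # delete the container when it exits
    "--network", "none",  # no inbound or outbound network access
    "--read-only",        # immutable root filesystem
    "--tmpfs", "/tmp",    # scratch space only; nothing persists
    IMAGE,
]

# Run the trial and observe what the agent tries to do before
# granting it anything on a real system.
subprocess.run(cmd, check=True)
```

In a real evaluation you would relax these restrictions one at a time, networking first, then a throwaway data directory, as the tool earns trust.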
As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies [2]. Organizations deploying agentic AI systems must carefully consider which integrations they support and what permissions they actually grant to prevent turning helpful tools into Trojan horses for attackers.

Summarized by Navi