3 Sources
[1]
AI agents like OpenClaw could do more harm than good
* OpenClaw exposures reveal thousands of internet-accessible, high-risk systems
* AI agents are being deployed with excessive permissions across critical environments
* Remote code execution vulnerabilities expose most observed OpenClaw deployments

Agentic systems are moving quickly from experimentation into everyday workflows, yet recent findings suggest security practices are not keeping pace. According to SecurityScorecard, thousands of OpenClaw deployments are exposed directly to the internet with minimal safeguards. The team identified 40,214 internet-exposed OpenClaw instances in total, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the internet.

Exposed AI agents become a hacker's dream target

"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated. Approximately 63% of observed deployments appear vulnerable to remote code execution, allowing attackers to take over the host machine without user interaction.

Among the exposures, the researchers identified three high-severity Common Vulnerabilities and Exposures (CVEs) affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8. Public exploit code is already available for all three vulnerabilities, meaning attackers do not need advanced skills to compromise exposed systems. The research also found that 549 exposed instances correlate with prior breach activity, and 1,493 are associated with known vulnerabilities that compound the risk for users. The exposed deployments are heavily concentrated in major cloud and hosting providers, indicating repeatable and easily replicated insecure deployment patterns.

OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent that can schedule meetings, send emails, and manage tasks on behalf of users. The problem is not the AI's capabilities but the access and permissions granted to these systems without proper security controls. "In practice, because it was written by AI, security wasn't a dominating feature in the development process," said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard. "For the folks that want to use the more agentic AI systems, you really need to take careful consideration in what integrations you support and what permissions you actually give."

Many users are configuring these bots with personal names and company names, revealing exactly who is using these AI tools and making them attractive targets for attackers. Any time a user connects an AI agent to a platform, they are giving it an identity with specific permissions. That identity may be able to post content, access email, read files, or interact with other systems on the user's behalf.

"The risk isn't that these systems are thinking for themselves," Turner said. "It's that we're giving them access to everything."

"It's like handing your laptop to a stranger on the street and hoping nothing bad happens... Any of the communications... on that device... are going to be interfaces from untrusted third parties that can... take certain actions."

A compromised agent could be instructed to transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate. Unfortunately, the report reveals a fundamental disconnect between AI adoption and security practices.
Users are being asked to give these agents broad system access, and in many cases, that has already led to data exposure, unintended actions, and loss of control. In some cases, OpenClaw takes actions beyond what users explicitly instruct, and Microsoft has since advised that it should not be run on standard personal or enterprise devices. Chinese authorities have restricted its use in office environments due to its tendency toward data exposure and broader security risks. Some OpenClaw vulnerabilities allow hackers to access sensitive data, and it has been used to distribute malware through GitHub repositories.

"Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," Turner said.
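Turner's "build in some separation" advice can be approximated even without container tooling. The sketch below is one illustrative way to do it, not anything documented for OpenClaw: it runs an agent process inside a throwaway working directory with a stripped environment so it cannot inherit credentials, SSH keys, or home-directory access from the parent shell. The agent launch command is a hypothetical placeholder, since the report does not describe OpenClaw's CLI.

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical agent command; substitute the real launch command for your
# agent. A placeholder Python one-liner is used so the sketch runs as-is.
AGENT_CMD = [sys.executable, "-c", "import os; print('agent sandbox:', os.getcwd())"]

def run_agent_isolated() -> int:
    """Run the agent in a scratch directory with a stripped-down environment.

    This is crude separation, not a hard security boundary: it keeps API
    tokens, SSH keys, and shell history out of the agent's reach, but a
    container, VM, or dedicated machine is still the safer choice.
    """
    with tempfile.TemporaryDirectory(prefix="agent-sandbox-") as scratch:
        clean_env = {
            "PATH": os.defpath,  # minimal search path, no user additions
            "HOME": scratch,     # agent's "home" is the disposable directory
            "TMPDIR": scratch,
        }
        result = subprocess.run(
            AGENT_CMD,
            cwd=scratch,         # no access to the caller's working tree
            env=clean_env,       # no inherited secrets or tokens
            timeout=300,         # kill runaway experiments
            check=False,
        )
        return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_agent_isolated())
```

A dedicated OS user with no group memberships, or a container with no mounted volumes, would harden this pattern further.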
[2]
Hackers exploit vulnerabilities in OpenClaw to control 28,000 systems
Hackers are exploiting insecure AI agents called OpenClaw, compromising more than 28,000 systems worldwide. SecurityScorecard's analysis reveals that these deployments expose thousands of high-risk systems directly to the internet, with minimal protective measures in place. The report identified a total of 40,214 internet-exposed OpenClaw instances, with 28,663 unique IP addresses hosting control panels accessible globally.

Approximately 63% of these deployments are vulnerable to remote code execution, which enables attackers to seize control of host machines without user interaction. Among the vulnerabilities, three high-severity Common Vulnerabilities and Exposures (CVEs) were noted, with CVSS scores ranging from 7.8 to 8.8. Public exploit code for each vulnerability is readily available, heightening the risk for unprotected systems. The findings show that 549 exposed instances correlate with previous breach activity, while 1,493 are linked to known vulnerabilities. Many exposed deployments occur within major cloud and hosting providers, highlighting repeated patterns of insecure setups.

OpenClaw, previously known as Moltbot and Clawdbot, functions as a personal AI agent, managing tasks and communications for users. The issue stems from excessive permissions granted to these systems without adequate security measures. Jeremy Turner, VP of Threat Intelligence at SecurityScorecard, stated, "In practice, because it was written by AI, security wasn't a dominating feature in the development process." He emphasized the importance of careful consideration regarding integrations and permissions assigned to these AI agents.

The report also found that users commonly configure the bots with identifiable personal or company names, making them attractive targets for cybercriminals. Connecting an AI agent to a platform provides that agent with specific permissions, including the ability to access emails or post content. Turner explained, "The risk isn't that these systems are thinking for themselves. It's that we're giving them access to everything." He likened this to handing a laptop to a stranger and expecting no negative consequences.

Consequences of compromising an agent could include unauthorized fund transfers or the sending of malicious messages, as the behaviors appear legitimate. The ongoing imbalance between rapid AI adoption and insufficient security measures has led to data exposure and loss of control among users. OpenClaw has raised concerns, prompting Microsoft to advise against its use on standard devices. Additionally, Chinese authorities have restricted OpenClaw in office environments due to significant security risks. Some vulnerabilities allow hackers to access sensitive information and have facilitated malware distribution via GitHub.

Turner urged caution, advising users not to deploy AI agents indiscriminately. "Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," he said.
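A quick way to test the core finding, that control panels are reachable from the open internet, is a plain TCP reachability check against your own deployment from an outside network. The sketch below is a minimal check under stated assumptions: the host and port are hypothetical placeholders (the report does not list a panel port), and a successful connection only proves reachability, not exploitability.

```python
import socket

# Hypothetical values: substitute your deployment's public address and the
# port its control panel actually listens on (not specified in the report).
HOST = "203.0.113.10"   # documentation/example IP range (RFC 5737)
PORT = 8080
TIMEOUT_SECONDS = 5.0

def panel_reachable(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Run this from a network outside your own (a cloud shell or a phone
    hotspot); from inside, firewall rules can mask the real exposure.
    """
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if panel_reachable(HOST, PORT):
        print(f"{HOST}:{PORT} is reachable -- the panel is internet-exposed.")
        print("Bind it to localhost or put it behind a VPN or authenticated proxy.")
    else:
        print(f"{HOST}:{PORT} did not accept a connection from this network.")
```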
[3]
OpenClaw trojan uses AI agents to take control of 28,000 systems
TL;DR: OpenClaw is a new AI-driven Trojan that has compromised over 28,000 systems by using autonomous AI agents to control, monitor, and extract data from infected machines. This malware represents a major evolution in cyber threats, enabling attackers to manage thousands of endpoints efficiently and adaptively.

A new Trojan dubbed "OpenClaw" is raising serious alarms, with researchers warning that AI agents are now being weaponized to take full control of thousands of systems. Security analysts report that OpenClaw has already compromised more than 28,000 machines, leveraging AI-driven automation to execute commands, adapt to environments, and maintain persistence on these systems in ways that traditional malware struggles to achieve.

The key concern here isn't the scale or the number of affected machines, but the infection's capability. OpenClaw effectively hands attackers a semi-autonomous operator inside a system, and this operator has access to the entire machine. According to a TechRadar report, the malware uses these AI agents to dynamically interact with compromised environments. Because the AI agents have access to the entire machine, the malware can make real-time decisions and control the system. This means attackers can automate system monitoring, perform lateral movement across the layers of the system they have access to, and even conduct data extraction.

"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated. The report outlines that AI lowers the barrier to entry while increasing efficiency, allowing a single operator to control thousands of endpoints simultaneously. These capabilities mark a significant evolution in cyber threats, where automation meets adaptability.

"Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard.

Looking ahead, OpenClaw underscores a growing shift in cybersecurity. As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies. The rise of AI-powered malware isn't theoretical anymore. It's already here, and it appears to be scaling in parallel with the sophistication of AI itself.
SecurityScorecard research reveals that over 28,000 OpenClaw AI agent deployments are exposed to the internet with minimal safeguards. Approximately 63% are vulnerable to remote code execution, with three high-severity vulnerabilities already equipped with public exploit code. The findings highlight a critical gap between rapid AI adoption and security practices.
A SecurityScorecard investigation has uncovered a troubling reality about AI agents in production environments. The research identified 40,214 internet-exposed instances of OpenClaw, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the internet [1]. These internet-exposed instances represent a significant attack surface, with approximately 63% of observed deployments vulnerable to remote code execution, a critical flaw that allows attackers to seize control of host machines without any user interaction [2].
The severity of the situation becomes clearer when examining the specific security vulnerabilities involved. Three high-severity Common Vulnerabilities and Exposures affect OpenClaw, with CVSS scores ranging from 7.8 to 8.8 [1]. Public exploit code is already available for all three vulnerabilities, meaning attackers don't need advanced skills to compromise exposed systems. The research also found that 549 exposed instances correlate with prior breach activity, while 1,493 are associated with known vulnerabilities that compound the risk [2].
OpenClaw, formerly known as Moltbot and Clawdbot, markets itself as a personal AI agent capable of scheduling meetings, sending emails, and managing tasks on behalf of users [1]. The problem isn't the AI's capabilities but the excessive permissions granted to these autonomous AI agents without proper security controls. "The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers stated [3].

Jeremy Turner, VP of Threat Intelligence at SecurityScorecard, explained that "in practice, because it was written by AI, security wasn't a dominating feature in the development process" [1]. This lack of robust security practices has created a situation where AI agents effectively function as semi-autonomous operators inside systems, capable of making real-time decisions and controlling entire machines [3]. A compromised agent could be instructed to transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate.

The exposed deployments are heavily concentrated in major cloud and hosting providers, indicating repeatable and easily replicated insecure deployment patterns [1]. Many users configure these AI agents with personal names and company names, revealing exactly who is using these tools and making them attractive targets for attackers [2]. When users connect an AI agent to a platform, they give it an identity with specific permissions that may include posting content, accessing email, reading files, or interacting with other systems.

Turner emphasized the scale of the risk: "The risk isn't that these systems are thinking for themselves. It's that we're giving them access to everything" [2]. He compared this to handing your laptop to a stranger on the street and hoping nothing bad happens. The AI-powered malware capabilities enable attackers to automate system monitoring, perform lateral movement across system layers, and conduct data extraction efficiently [3].
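The "identity with specific permissions" framing suggests an obvious mitigation: put an explicit allowlist of actions in front of the agent instead of handing it a blanket identity. The sketch below is illustrative only; the scope names and the gate itself are hypothetical, not part of OpenClaw, but the pattern (deny by default, log every attempt) applies to any agent integration.

```python
from dataclasses import dataclass, field

@dataclass
class ScopedAgentGate:
    """Deny-by-default gate between an AI agent and real-world actions.

    Scope names here are hypothetical examples; map them to whatever
    actions your integration actually exposes.
    """
    granted_scopes: frozenset[str]
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        # Anything not explicitly granted is refused and recorded.
        allowed = action in self.granted_scopes
        self.audit_log.append(f"{'ALLOW' if allowed else 'DENY '} {action}")
        return allowed

# Grant only what the agent's job requires; note no file or payment scopes.
gate = ScopedAgentGate(granted_scopes=frozenset({"calendar:write", "email:draft"}))

for requested in ["calendar:write", "email:send", "files:read", "payments:transfer"]:
    if gate.authorize(requested):
        print(f"executing {requested}")
    else:
        print(f"refused {requested}; requires an explicit human grant")

print("\n".join(gate.audit_log))
```

Granting "email:draft" but not "email:send", for example, keeps a human approval step between the agent and any externally visible action.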
The severity of OpenClaw's security vulnerabilities has prompted responses from major technology companies and governments. Microsoft has advised that OpenClaw should not be run on standard personal or enterprise devices, while Chinese authorities have restricted its use in office environments due to its tendency toward data exposure and broader security risks [1]. Some vulnerabilities allow hackers to access sensitive data, and OpenClaw has been used to distribute malware through GitHub repositories.

Looking ahead, the OpenClaw situation underscores a fundamental disconnect between AI adoption and security practices. Turner advised users to exercise caution: "Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do" [1]. The rise of AI-powered malware marks a significant evolution in cyber threats, where automation meets adaptability. As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies to address this growing threat landscape [3].
[1] "AI agents like OpenClaw could do more harm than good", 04 Feb 2026, Technology
[2] "Hackers exploit vulnerabilities in OpenClaw to control 28,000 systems", 30 Mar 2026, Technology
[3] "OpenClaw trojan uses AI agents to take control of 28,000 systems", 09 Mar 2026, Technology