7 Sources
7 Sources
[1]
Here's why it's prudent for OpenClaw users to assume compromise
For more than a month, security practitioners have been warning about the perils of using OpenClaw, the viral AI agentic tool that has taken the development community by storm. A recently fixed vulnerability provides an object lesson for why. OpenClaw, which was introduced in November and now boasts 347,000 stars on Github, by design takes control of a user's computer and interacts with other apps and platforms to assist with a host of tasks, including organizing files, doing research, and shopping online. To be useful, it needs access -- and lots of it -- to as many resources as possible. Telegram, Discord, Slack, local and shared network files, accounts, and logged in sessions are only some of the intended resources. Once the access is given, OpenClaw is designed to act precisely as the user would, with the same broad permissions and capabilities. Severe impact Earlier this week, OpenClaw developers released security patches for three high-severity vulnerabilities. The severity rating of one in particular, CVE-2026-33579, is rated from 8.1 to 9.8 out of a possible 10 depending on the metric used -- and for good reason. It allows anyone with pairing privileges (the lowest-level permission) to gain administrative status. With that, the attacker has control of whatever resources the OpenClaw instance does. "The practical impact is severe," researchers from AI app-builder Blink wrote. "An attacker who already holds operator.pairing scope -- the lowest meaningful permission in an OpenClaw deployment -- can silently approve device pairing requests that ask for operator.admin scope. Once that approval goes through, the attacking device holds full administrative access to the OpenClaw instance. No secondary exploit is needed. No user interaction is required beyond the initial pairing step." The post continued: "For organizations running OpenClaw as a company-wide AI agent platform, a compromised operator.admin device can read all connected data sources, exfiltrate credentials stored in the agent's skill environment, execute arbitrary tool calls, and pivot to other connected services. The word 'privilege escalation' undersells this: the outcome is full instance takeover." While fixed, the vulnerability means that thousands of instances may have been compromised without users having the slightest idea. Ever since OpenClaw rose to viral sensation, security professionals have warned of the dangers that come from an LLM -- by its very nature unreliable and prone to the most basic of mistakes -- gaining access to such a vast number of sensitive resources and acting autonomously. Earlier this year, a Meta executive said he told his team to keep OpenClaw off their work laptops or risk being fired. The executive said the unpredictability of the tool could lead to breaches in otherwise secure environments. Other managers have issued the same mandate. Security researchers, too, have issued warnings. A widely circulating Reddit post Friday carried the title "If you're running OpenClaw, you probably got hacked in the last week." It reasoned that the patches dropped on Sunday but didn't receive a formal CVE listing until Tuesday. That means that alert attackers had a two-day headstart to exploit before most OpenClaw users would have known to patch. Making the chances of active exploitation more likely, Blink said that 63 percent of the 135,000 OpenClaw instances found exposed to the Internet in a scan earlier this year were running without authentication. The result is that attackers already had the pairing privileges required to gain administrative control with no credentials required. "On these deployments, any network visitor can request pairing access and obtain operator.pairing scope without providing a username or password," Blink said. "The authentication gate that is supposed to slow down CVE-2026-33579 does not exist." The vulnerability stems from the failure of OpenClaw to invoke any authentication during the request for administrative-level pairing. The core approval function -- src/infra/device-pairing.ts -- didn't examine the security permissions of the approving party to check if they have the privileges required to grant the request. As long as the pairing request was well-formed it was approved. The guidance to assume compromise is well-founded. Anyone who runs OpenClaw should carefully inspect all /pair approval events listed in activity logs over the last week. Beyond that, users should reconsider their use of OpenClaw altogether. Whatever efficiency may be gained from using the tool could easily be undone in the event a threat actor obtains the keys to a network kingdom.
[2]
Claws: From AI Generation to AI Execution
Barbara is a tech writer specializing in AI and emerging technologies. With a background as a systems librarian in software development, she brings a unique perspective to her reporting. Having lived in the USA and Ireland, Barbara now resides in Croatia. She covers the latest in artificial intelligence and tech innovations. Her work draws on years of experience in tech and other fields, blending technical know-how with a passion for how technology shapes our world. "First there was chat, then there was code, now there is claw," AI researcher Andrej Karpaty posted on X in February. The AI lexicon continues to expand, with claws now a new layer on top of AI agents, and it all began with OpenClaw. OpenClaw -- which went through short-lived iterations as Clawdbot and Moltbot -- is an open-source AI agent designed to execute tasks autonomously across your most-used apps and services. "Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy," Nvidia CEO Jensen Huang said during the 2026 GTC conference in San Jose in March, calling it "the new computer." OpenClaw started the trend, but "claw" is now a category in its own right. And multiple companies now sell, ship or wrap their own versions of agents. So what exactly is a claw, and why is everyone from solo hackers to Silicon Valley giants obsessed with "raising lobsters"? Let's explore. A claw is an AI agent that can actually do things on a computer, not just talk about doing them. You give it a goal, it breaks the goal into smaller steps, then it uses tools, like a web browser, a terminal or your apps, to carry out those steps. The name comes from the idea of "clawing" into your system -- having the hands, or claws, to actually grab files, run terminal commands and control your mouse. Every claw is an agent, but not every agent is a claw. While a standard AI agent waits for you to type a prompt, a claw can wake itself up at 3 a.m. because it noticed an urgent email from your boss and decided to draft a response based on a spreadsheet it found in your Downloads folder. That sounds like a fancy way of saying automation. The difference is that a claw doesn't need you to script every move. It can plan on the fly and react when something changes. It remembers what you asked for and what already happened, so it doesn't reset after every prompt. Claws also have guardrails, or they should have, so they don't do something destructive when a model makes a bad call. "These agents are general-purpose computer agents," Gavriel Cohen, creator of NanoClaw and CEO of NanoCo, tells CNET. "Anything that a person can do with a computer, an agent can do." Cohen says there is a lot of value to unlock with these agents, calling them powerful. He says Peter Steinberger, the creator of OpenClaw, connected the model to other tools in a way that made it "YOLO mode -- do anything." Unlike agent mode in AI browsers, claws are not tied to a browser window or a single dashboard. If run locally on your machine, a claw connects to your computer through a terminal, giving it access to your files, apps and system controls. But you usually don't talk to it there. You message it through apps like WhatsApp, Telegram, Discord, Slack or iMessage, turning those chat apps into a remote control for your computer. Google has also started making this easier. Connecting a claw to Google Workspace used to mean stitching together multiple APIs and workarounds. Google's release of the Google Workspace CLI gives developers a more direct path into tools like Gmail and Drive. While Google warns that this is a developer tool and not an officially supported product for the average user, it shows that big platforms are starting to embrace the claw ecosystem. More claws are also moving to the cloud. A local claw runs on your own device, while a cloud-hosted claw runs on remote servers, which means it can stay active around the clock and keep working even when your computer is off. That makes it more useful for background jobs, but it also means giving up some control. Despite the hype, these are still not tools for non-technical users. If you aren't comfortable working in a terminal, you shouldn't be running one on your own. Another big part of claw operations is skills. These are reusable add-ons, connectors and plug-ins that expand what a claw can do. OpenClaw helped popularize that model and points users to a community skill registry called ClawHub. Over time, those skills marketplaces could start to look more like app stores, where people download capabilities as needed. Cohen says there will be marketplaces for skills, and organizations will create them because "that's where a lot of their value is going to be accrued." OpenClaw kicked off the current claw wave, but it didn't stay solo for long. Once the idea caught on, big platforms and smaller developer teams rushed to build their own versions, either by forking OpenClaw, adding more controls or rebuilding parts of it for a different setup. This is a community-led project that runs locally with deep system access, which is why it feels both powerful and risky. Because it is open source, you can inspect the code and build new skills, but it still takes technical know-how to set up safely. Announced at GTC 2026, NemoClaw is Nvidia's security-focused OpenClaw stack. It adds privacy and policy guardrails around OpenClaw to make autonomous agents less risky in enterprise settings. Acquired by Meta in late 2025, Manus recently launched a desktop app that runs instructions directly in your terminal. My Computer is a claw-like desktop agent, not a claw per se, and it allows the agent to manage local files and apps, bridging the gap between a cloud assistant and a full desktop controller. Claude Cowork is Anthropic's clearest entry in the claw category. It runs locally on your computer in an isolated virtual machine, giving the agent access to local files and integrations while keeping the setup more contained than a raw OpenClaw install. Its Dispatch feature lets you assign a task on your desktop and walk away, then check progress or provide mid-task guidance on your phone. Perplexity's Computer is more claw-adjacent than classic claw. It runs in a fully sandboxed cloud environment with its own isolated browser and filesystem, so the agent stays off your personal machine. NanoClaw goes in the opposite direction from bigger, all-in-one systems. It stays small, boxed-in and easier to inspect. That makes it more appealing to developers who want tighter control over what the agent can access. Cohen tells CNET the team kept it minimal on purpose, which limits what it can do out of the box but makes it easier to customize. Then there are tiny claw variants built for low-power devices. Projects like PicoClaw, ZeroClaw and MimiClaw aim to run on minimal hardware, bringing claw-style automation to cheaper hardware. They're early, but they hint at where this could go next. In its perpetual AI race with America, several China-based tech firms rolled out their own versions and integrations. Tencent added a ClawBot plug-in to WeChat. ByteDance launched ByteClaw for employees, built on Volcano Engine's ArkClaw enterprise version. Alibaba rolled out JVS Claw, a mobile app designed to simplify deploying OpenClaw for non-coders. Xiaomi has been testing miclaw, a system-level agent for Xiaomi phones and smart home devices. The claw hype moved so fast that people reportedly paid for help installing OpenClaw, and later paid again to have it removed as security worries spread. The risk of giving an AI root access to your computer is a massive security gamble. If a claw can read your emails, then a hacker who tricks that claw can read them too. Security researchers have warned about OpenClaw's compromised skills on ClawHub and the broader risk of over-privileged agent setups. Even without an attacker, the model can make mistakes. "You need to think about the agent as potentially malicious," Cohen tells CNET. He says an agent can be thrown off by prompt injection or just hallucinate a bad loop and delete all your emails. "You can't trust agents just by giving them instructions to never delete the database. They can drop the database by accident anyway." This is exactly what happened to Meta's director of AI alignment, Summer Yue, who recently gave OpenClaw access to her email with explicit orders not to act without approval. The agent ignored her, began mass-deleting her inbox and wouldn't stop until she ran to her computer to kill the process. The safer approach is to avoid handing an agent unlimited access in the first place. Instead, credentials should stay outside the agent itself, with rules that control what each agent can do. "It's not binary," Cohen tells CNET. Rather than choosing between full access or none, you should be able to limit actions, like letting an agent read emails but not delete them. Cohen says the goal is "limiting the blast radius," so a mistake or prompt injection can only cause limited damage. That's why the conversation keeps circling back to sandboxes, permissions and human-in-the-loop approvals for risky steps. This tension between power and safety is currently the biggest controversy in the industry. "Warning is good, scaring is less good, because this technology is important to us," Huang said on the All-In Podcast at GTC. Despite the risks, the benefits are hard to ignore. A claw can automate the digital chores that take up much of your day, like my personal nemesis -- clearing up my overflowing inbox. They can cut out the tasks of knowledge jobs like pulling info from three places, formatting it, updating a spreadsheet, opening a ticket and doing it again tomorrow. Cohen advises to use multiple claws, not one super-claw: "The agent that browses the internet and does research shouldn't be the same one that's handling your financial data." In the near future, you won't use AI because your computer will simply be AI. Your operating system will be a collection of specialized claws working together in the background. "I think it's in the next six months that everybody's gonna have a personal assistant that brings massive value to them and helps them accomplish their goals and manage their time," Cohen tells CNET. He thinks every employee will have an AI assistant that can handle parts of their job, while teams will oversee groups of agents. Six months seems a little soon, but with the breakneck speed AI is moving at, we'll have to wait and see.
[3]
Don't deploy OpenClaw without securing it - Try this opensource solution and hands-on lab
OpenClaw becomes powerful the moment it can connect a model to tools, skills, MCP servers, and a live workspace. That is also the moment security stops being optional. If you are evaluating OpenClaw, or planning to run it in front of real tools and data, the first question should not just be what the agent can do. The first question should be what happens if it trusts the wrong component. What OpenClaw Actually Changes OpenClaw is useful because it helps AI agents do more than answer isolated prompts. It can: * Connect to skills * Use MCP servers * Call tools and services * Work with files and a workspace * Generate code that lands in the environment That makes OpenClaw more capable. It also creates more trust boundaries. When an agent can install helpers, call external tools, and act on a live workspace, the risk is no longer limited to bad text generation. Now the system has to decide what gets trusted, what gets executed, what reaches the model, and what code gets written into the environment. Why OpenClaw Security Matters This is not just a hypothetical design concern. Koi Security's audit of 2,857 ClawHub skills found 341 malicious entries, or 11.9%. A published arXiv study found that 26.1% of analyzed skills had at least one vulnerability. The same study reported 13.3% with data-exfiltration patterns and 11.8% with privilege-escalation patterns. Those numbers do not mean every OpenClaw skill is malicious. They do mean something more practical: there is already enough risky behavior in the ecosystem that OpenClaw should not be run without security controls in front of it. One bad skill with file-read permissions and a live workspace can be enough to expose data, run risky commands, or damage the environment. Read more stats on this overview page. What DefenseClaw Provides DefenseClaw is free, open-source security solution for OpenClaw. It adds checks before install and while the system is running. It provides protection through four capability areas/engines: If you want to see technical details, you can review the full diagram. The live demo has examples that explain what each engine does. 1. Guardrails The guardrail flow shows how risky prompts and poisoned content can change model behavior once the model is connected to a real workflow. In the demo, a poisoned note or privacy-style request pushes the model toward an unsafe path. DefenseClaw inspects that traffic and blocks the unsafe outcome before it reaches the protected model path. 2. Tool Inspection The MCP section is one of the clearest parts of the walkthrough. It shows how a malicious MCP path can try to: * read synthetic AWS credentials * run a host command * fetch internal configuration In the protected path, those tool requests are blocked by policy before they reach the final tool outcome. 3. Install Scanning Security has to start before trust. The demo shows what happens when OpenClaw is asked to accept: * a malicious skill * an unsafe MCP server DefenseClaw scans those components before they are trusted and can reject or quarantine them before they become part of the workflow. 4. CodeGuard The final path focuses on agent-written code. That matters because even when a prompt or tool call looks harmless, the next step may be code generation that lands in the workspace. The demo makes that concrete with examples such as: * shell execution * embedded private key material * unsafe SQL construction DefenseClaw scans those patterns before the file write lands. OpenClaw Security Lab OpenClaw security lab is a hands-on walkthrough where you set up your own OpenClaw environment, test malicious skills, unsafe MCP servers, prompt attacks, and risky code paths, then apply DefenseClaw to inspect or block them before they cause harm. You can also use it as a best-practice reference for deploying DefenseClaw and securing your own environment. Start the lab here: OpenClaw Security hands-on lab If you want more, try all the hands-on labs in the AI Security Learning Journey at cs.co/aj. Have fun exploring the labs, and feel free to reach out if you have questions or feedback.
[4]
How to safely experiment with OpenClaw
OpenClaw is one of the fastest-growing open-source projects in history, and it's easy to see why. Connect it to your messaging apps, give it access to your email and calendar, and you have an AI agent that actually does things around the clock instead of just answering questions. For IT managers, operations leads, and developers exploring automation, that's a compelling pitch. The catch is that OpenClaw's power comes directly from the permissions you give it. Set it up carelessly, and you're handing an AI agent root access to your machine, your credentials, and potentially your company's data. With the right approach, though, you can explore what it can do without taking on unnecessary risk. How does OpenClaw work? OpenClaw is a self-hosted agent runtime that acts as a personal AI assistant running on your own machine. It's a long-running Node.js service that connects chat platforms like WhatsApp and Discord to an AI agent capable of executing real-world tasks. You interact with it through messaging apps you already use, and it acts on your behalf: browsing the web, managing files, running scripts, and calling external APIs. The agent is model-agnostic. You can connect it to Claude, GPT, DeepSeek, or a locally hosted model using your own API keys. Its capabilities come from "skills," which are extensions that let the agent interact with browsers, file systems, messaging apps, and productivity tools. Some installations ship with over 100 prebuilt skills, and developers can add their own. The architecture is deliberately simple. Persistent memory is stored as Markdown files on disk, so you can view and edit the agent's notes directly. It also runs on a schedule. It can check your inbox each morning, flag anything urgent, and keep working on longer tasks while you're away. Is it safe to use OpenClaw? In its default state, no. OpenClaw requires access to email accounts, calendars, messaging platforms, and system-level commands, which creates a wide attack surface. A Kaspersky security audit from early 2026 identified 512 vulnerabilities, eight of them critical. Researchers around the same time found nearly a thousand publicly accessible OpenClaw installations running with no authentication at all. The most persistent risk is prompt injection. Every email, message, and webpage your agent reads is a potential attack vector. A malicious actor can embed instructions inside content the agent processes, tricking it into leaking credentials or executing commands you never authorized. This isn't a fringe concern; it's architecturally baked in, and the project's own creator has acknowledged it as an unsolved problem. The skills marketplace adds another layer of risk. Bitdefender found that around 20% of ClawHub skills were malicious. Installing a skill is essentially installing privileged code, and unverified skills have been linked to credential theft and data exfiltration. A critical vulnerability from early 2026, CVE-2026-25253, enabled one-click remote code execution via WebSocket token theft, and researchers found over 17,500 internet-exposed instances affected before it was patched. Even with individual vulnerabilities addressed, the underlying architecture keeps the risk real. Broad permissions, external content ingestion, and a public skills marketplace are features, not bugs, and they require ongoing attention rather than a one-time fix. Yet none of this puts OpenClaw out of reach. We've seen developers run it securely using isolated environments, scoped credentials, and active monitoring. The way you deploy it is what determines whether experimenting with OpenClaw is a manageable risk or an open door. How to use OpenClaw safely Running OpenClaw on your primary laptop with full system access is a very different proposition from running it in a sandboxed container on a dedicated machine with tightly scoped credentials. The deployment choices you make upfront shape almost every other risk factor, so it's worth getting those right before you do anything else. Choosing a deployment environment Your first decision is where OpenClaw actually runs. Each option offers a different tradeoff between convenience and isolation. Dedicated hardware If you want to experiment on physical hardware, use a spare machine, not your primary laptop or a work device. A dedicated Mac Mini or Raspberry Pi keeps the agent off machines that hold sensitive data and makes it straightforward to wipe and rebuild if something goes wrong. Docker containers Docker is a good option for developers who want isolated, reproducible setups. Configure it to run OpenClaw as a non-root user, use a read-only root filesystem, drop all Linux capabilities, and bind the gateway port to 127.0.0.1 so it's only accessible from the host or over an SSH tunnel. Mount only the directories the agent actually needs. VPS hosting VPS servers add network isolation that's hard to replicate on a local machine. Hostinger's Docker-based OpenClaw deployment automatically assigns a random port and enables gateway authentication. DigitalOcean offers a similar hardened image that removes two common configuration mistakes. Both are reasonable starting points, but they still need the additional hardening steps below. Locking down network access Keep the gateway off the public internet. Bind it to localhost or a private network, use a firewall, and access it remotely over a VPN like Tailscale. OpenClaw's gateway runs on port 18789 by default, and leaving that exposed is one of the most common misconfigurations we've seen documented in the wild. If you're running OpenClaw in Docker, note that Docker has its own forwarding chains that bypass standard host firewall rules. Route your rules through the DOCKER-USER chain to ensure they apply. On shared networks, also consider disabling mDNS broadcasting: the gateway advertises its presence with TXT records that can expose filesystem paths and hostname details to anyone else on the network. Credentials and permissions Never connect OpenClaw to your primary accounts. Create dedicated accounts for any messaging apps or services you link to it, use separate API keys per service, and set spending limits where your provider allows. Store credentials in environment variables rather than plain-text config files, and restrict file permissions so sensitive files are only readable by the OpenClaw process owner. Apply the same restraint to skills. Only enable what OpenClaw genuinely needs for the task at hand, and review the source code of any ClawHub skill before installing it. Given that roughly one in five skills in the marketplace has been found to be malicious, treating it as untrusted by default is the safer starting position. Sandbox mode and tool policy Enable sandbox mode. Without it, commands execute with far fewer restrictions, which significantly widens what a successful prompt injection could do. If you're using Docker, also disable external network access for sandboxed tasks unless you have a specific reason to allow it. On top of sandboxing, configure a restrictive tool policy. Block dangerous commands by default, use allowlists rather than denylists where possible, and for anything that touches production systems or sensitive data, require explicit human approval before the agent acts. Ongoing oversight Safe deployment isn't a one-time setup. Enable session and action logging from the start, so you have a record of what the agent executes, when, and why. Review logs regularly, particularly in the early stages when you're still getting a feel for what normal behavior looks like. Keep OpenClaw updated, watch the project's security advisories, and run an OpenClaw security audit after any configuration change or change to your network setup. If the logs show something unexpected, take it seriously. The agent has access to your credentials and files, and catching anomalies early is much easier than investigating after the fact.
[5]
DefenseClaw is Live!
Last week, DJ wrote about why OpenClaw - the agent he uses to help run his family' life needs a governance layer. He pointed to ClawHavoc, 135K exposed instances, and the growing gap between how powerful OpenClaw is and how little anyone was doing to secure it. That gap is exactly why we built DefenseClaw. DefenseClaw is now live on GitHub. It is open source, ready to install, and built to bring governance, enforcement and observability to OpenClaw. You already know why this matters. This post will cover what you can do about it. DefenseClaw is the operational governance layer that was missing from the stack. NVIDIA provided the sandbox foundation with OpenShell. The Cisco AI Defense team open sourced the scanners. DefenseClaw brings them together into one governed loop - so the security decisions happen automatically. When you install a skill, plugin or MCP through DefenseClaw CLI, it gets scanned before it is allowed into your environment. But we don't assume everything will go through CLI, so it continuously monitors the relevant directories for any changes - where it's a manually added plugin, a copied skill or something pulled by another process. Critical and high-severity findings can trigger enforcement actions, and every event is logged. Scanning at install time isn't enough. A prompt injection attack from your email connected to your OpenClaw could compromise your system or result in leakage of your personal information. So, we built an inspection engine that sits in the execution loop as a OpenClaw plugin - LLM prompts, completions, and tool invocations get checked in real time for injection attacks, data exfiltration and common-and-control patterns. We also built CodeGuard to scan code that the agent writes. Every file the claw generates, or edits gets checked for hardcoded secrets, command injection, unsafe deserialization, and bunch of other patterns. If your agent writes eval(input) into a file, CodeGuard catches it before it hits the filesystem. You can start in monitor mode where everything is logged, and nothing is blocked then switch over to action mode for real time protection. We enforce protection at the system boundary so that even in a failure scenario the impact is contained. At the infrastructure layer, OpenShell acts as the outer guardrail governing the network and file system i/o, ensuring that even if your OpenClaw is compromised, it cannot freely reach external systems or modify sensitive files. Every scan result, block decision, tool call, alert - it all streams as structured events from the moment you start. We ship with a one-command Splunk setup locally or in Splunk observability cloud (o11y). This gives you a local Splunk instance with a purpose-built DefenseClaw app - dashboard, saved searches, investigation workflows all pre-wired. If your claw does something, there's a record with full observability. curl -LsSf https://raw.githubusercontent.com/cisco-ai-defense/defenseclaw/main/scripts/install.sh | bash defenseclaw init -enable-guardrail To make it even easier to get started, we have also published an OpenClaw security learning lab so you can see how it works and start experimenting right away. DefenseClaw is shipping as a fully functional governance layer. Native support for other Agents like ClaudeCode, OpenCode, ZeroClaw, Codex, etc., are coming very soon, besides numerous other features and capabilities.
[6]
What is OpenClaw? Agentic AI that can automate any task
You've probably used an AI powered toools to draft an email or summarize a document. But what if your AI assistant could actually send that email, organize your inbox, and schedule the follow-up call while you're making coffee? That's the gap OpenClaw is designed to fill. Professionals dealing with repetitive digital workflows are paying close attention to this tool. If you manage calendars, chase leads, handle customer messages, or juggle a dozen browser tabs at once, OpenClaw promises to hand those tasks to an AI that can carry them through from start to finish, not just tell you how to do them yourself. What is OpenClaw (aka Moltbot or Clawdbot)? OpenClaw is an open-source AI agent that runs on your own hardware and connects large language models (LLMs) like Claude or ChatGPT to the software and services you use every day. Unlike a chatbot, it doesn't stop at generating a response. It can take actions: reading and writing files, sending messages, browsing the web, executing scripts, and calling external APIs, all through familiar messaging apps like WhatsApp, Telegram, or Slack. The project was created by Austrian developer Peter Steinberger, founder of PSPDFKit. It's built around a local "Gateway" process that acts as the control plane, sitting between your messaging apps and the AI model, routing instructions and executing tasks. Think of it as giving your AI a pair of hands and a persistent memory, rather than just a voice. The LLM provides the reasoning; OpenClaw provides the infrastructure to act on it. What makes it stand out from managed AI platforms is the degree of control it gives you. Your conversation history, session state, and tool execution all stay on your own infrastructure. The only calls going out are to your chosen LLM provider's API. Launch, virality, and early reception OpenClaw started life in November 2025 under the name Clawdbot. It was renamed Moltbot in January 2026 following a trademark complaint from Anthropic, and then rebranded again as OpenClaw three days later. Within weeks of that final rename, the project passed 100,000 GitHub stars and became one of the most-discussed tools across developer communities on Reddit, LinkedIn, and X. The viral moment was partly driven by the Moltbook project, an experimental platform where OpenClaw agents can interact with each other rather than with human users. Nvidia CEO Jensen Huang said at GTC 2026 that OpenClaw was "probably the single most important release of software, you know, probably ever," drawing comparisons to the long-term impact of Linux. Sam Altman was taken enough by the project to hire Steinberger directly and announced in February 2026 that OpenClaw would move to an open-source foundation. Current state, what's next, pros and cons OpenClaw has surpassed 250,000 GitHub stars, moving past React as the most-starred non-aggregator project on the platform. Steinberger is now at OpenAI, and the project is governed by an independent open-source foundation. Enterprise adoption is growing, with Nvidia reportedly running OpenClaw instances across its internal teams for tasks ranging from tooling development to code writing. That said, the platform is still maturing. Security researchers at Cisco, Gartner, and Trend Micro have all flagged real risks around how the tool is deployed by default, and the skill marketplace has seen supply chain abuse. We'll cover both in more detail below. How does OpenClaw work? At its core, OpenClaw is a local orchestration layer. You install it on your machine or a server, and it runs a background process called the Gateway, which listens on port 18789 by default. When you send a message through WhatsApp, Telegram, or another connected channel, the Gateway receives it, normalizes it, and routes it to the right agent session. The agent runtime then loads context from a set of plain-text workspace files. These include SOUL.md, which defines how the agent behaves; AGENTS.md, which describes its role; and TOOLS.md, which governs what it can do. It also searches a local memory folder for anything relevant from past conversations. All of this gets compiled into a system prompt and sent to your chosen LLM. The LLM reads the full context and decides what to do next. If the task just needs a reply, it writes one. If it needs to take an action, it requests a tool call. The agent runtime intercepts that request and executes it directly: running a shell command, opening a browser, reading or writing a file, or calling an API. The response then streams back to you through the original messaging channel. Memory is file-based and local. Because everything is written to files on your machine rather than a remote server, OpenClaw sessions persist across restarts. Tell it about a project on Monday, and it still knows about it on Friday. You can also configure multiple agents with different personalities, tools, and permissions, all running through a single Gateway process. Capabilities are extended through Skills, which are Markdown instruction files stored in the workspace. Over 100 built-in skills are available, covering things like calendar management, email, browser automation, and CRM integration. The community maintains a registry where you can find and contribute additional skills. OpenClaw only injects the skills relevant to each specific request, rather than loading everything at once, which keeps the system prompt manageable and the LLM responses focused. OpenClaw use cases OpenClaw is model-agnostic, self-hosted, and highly configurable, which makes it flexible enough to cover a wide range of workflows. Businesses using it have reported particular value in automating lead-generation pipelines and cutting down on manual admin work. Some of the most common practical use cases include: * Inbox and calendar management: Sorting messages, drafting replies, scheduling meetings based on context * Lead generation workflows: Prospect research, website auditing, and pushing results into a CRM * Morning briefings: Pulling together news, tasks, and notifications into a daily summary * Content pipelines: Drafting, reviewing, and publishing content from a single prompt * Code review assistance: Integrating with developer tools to summarize pull requests or flag issues * Internal tooling: Building lightweight custom tools for tasks that don't justify a full software project * File and data management: Organizing, searching, and transforming documents across folders * API automation: Connecting to third-party services without writing custom integration code How to use OpenClaw OpenClaw is a Node.js application. As a prerequisite, you'll need Node.js 22 or later installed on your OS before you start. From there, the quickest path is running the official onboarding wizard via a terminal command, which walks you through the process of connecting an LLM, linking your first messaging channel, and installing a background service to keep it running around the clock. It works on macOS, Linux, and Windows (with WSL2 recommended for Windows users). If you want a safer managed deployment, VPS providers, including Hostinger and DigitalOcean, offer one-click OpenClaw instances with hardened security images. That means you no longer need to handle server provisioning yourself, not to mention you get to keep it separate from your local network. Red Hat AI also offers an enterprise deployment path through OpenShift, which adds role-based access controls without modifying the agent's code. However, you should know that execution-focused agentic AI tools like OpenClaw are not secure by default. Because the Gateway requires broad permissions to work effectively, a misconfigured or publicly exposed instance can be exploited. Security firm Acronis found that a honeypot mimicking an OpenClaw gateway attracted exploitation attempts within minutes of going live. Keep the Gateway behind a loopback or trusted private network, manage API keys carefully, and vet any third-party skills before installing them from the community registry.
[7]
CertiK: OpenClaw AI Agent Puts Crypto Wallets at Risk
CertiK has advised ordinary users "who are not security professionals, developers, or experienced geeks" against installing and using OpenClaw. The widespread integration of AI assistants such as OpenClaw introduces critical security risks that open up users to unauthorized actions, data exposure, system compromises and drained crypto wallets, according to cybersecurity firm CertiK. OpenClaw is a self-hosted AI agent that integrates with messaging platforms such as WhatsApp, Slack, and Telegram and can autonomously take actions on users' computers, such as managing email, calendars, and files. It's estimated there are around 2 million active monthly users of the platform, according to Openclaw.vps. A McKinsey study in November revealed that 62% of survey respondents said their organizations were already experimenting with AI agents. However, CertiK warns that it has become a "primary supply chain attack vector at scale." OpenClaw grew from a side project called Clawdbot, launched in November 2025, to over 300,000 GitHub stars, a bookmarking or "like" feature on the developer platform, signaling a surge in popularity but accumulating serious "security debt" in the process, noted CertiK. However, within weeks of launch, Bitsight identified 30,000 internet-exposed instances of OpenClaw, and SecurityScorecard researchers found 135,000 instances across 82 countries, with 15,200 specifically vulnerable to remote code execution. OpenClaw has also become the most "aggressively scrutinized AI agent platform from a security standpoint," accumulating more than 280 GitHub Security Advisories, 100 Common Vulnerabilities and Exposures (CVEs), and a "string of ecosystem-level attacks" since its November launch, CertiK researchers wrote in a report shared with Cointelegraph. Because OpenClaw acts as a bridge between external inputs and local system execution, "it introduces classic attack vectors," the researchers said. These include local gateway hijacking, where malicious websites or payloads could exploit the agent's local machine presence to extract sensitive user data or execute unauthorized commands. Related: SlowMist introduces Web3 security stack for autonomous AI agents CertiK warned of the dangers of plugins, which could add channels, tools, HTTP routes, services, and providers, while malicious skills could be installed from local or marketplace sources. Unlike traditional malware, "malicious skills" can manipulate behavior through natural language, resisting conventional scanning. "Once launched, the malware can exfiltrate sensitive information such as passwords and cryptocurrency wallet credentials." Malicious backdoors may also be hidden within legitimate functional codebases, "where they fetch seemingly benign URLs that ultimately deliver shell commands or malware payloads," they added. CertiK researchers told Cointelegraph that attackers strategically seeded malicious skills across various high-value categories, "including utilities for Phantom, wallet trackers, insider-wallet finders, Polymarket tools, and Google Workspace integrations." "They cast a remarkably wide net across the crypto ecosystem, with the primary payload designed to target a large number of browser extension wallets simultaneously, such as MetaMask, Phantom, Trust Wallet, Coinbase Wallet, OKX Wallet, and many others," they said. The researchers added that there was a "clear overlap in tradecraft with the broader crypto-theft ecosystem, like social engineering, fake utility lures, credential theft, wallet-focused phishing." "These are all well-known plays from the crypto drainer playbook, and we did see them used here. OpenClaw founder Peter Steinberg, who recently joined OpenAI, said they are working on improving OpenClaw's security. "Something that we worked on for the last two months is security. So things are a lot better on that front," said Steinberg at the "ClawCon" event on Monday in Tokyo. Earlier this month, cybersecurity firm OX Security reported a phishing campaign that used fake GitHub posts and a bogus "CLAW" token to lure OpenClaw developers into connecting crypto wallets. CertiK advised ordinary users "who are not security professionals, developers, or experienced geeks," not to install and use OpenClaw from scratch but wait for "more mature, hardened, and manageable versions." Cybersecurity company SlowMist introduced a security framework for AI agents earlier in March, pitching it as a "digital fortress" to defend against risks that come with autonomous systems handling onchain actions and digital assets.
Share
Share
Copy Link
A critical vulnerability in OpenClaw, the viral AI agent tool with 347,000 GitHub stars, has security experts urging users to assume compromise. CVE-2026-33579 allows attackers with minimal permissions to gain full administrative control, potentially exposing sensitive data across thousands of unprotected instances. The incident highlights the inherent security risks of autonomous AI agent operations.
A severe OpenClaw vulnerability patched earlier this week has security practitioners warning users to assume their systems may already be compromised. CVE-2026-33579, rated between 8.1 and 9.8 out of 10 depending on the metric used, allows anyone with pairing privileges—the lowest-level permission—to silently escalate to administrative status and gain full control over whatever resources the AI agent accesses
1
.
Source: Cointelegraph
The timing of the disclosure amplified the risk. Patches dropped on Sunday but didn't receive a formal CVE listing until Tuesday, giving alert attackers a two-day window to exploit the flaw before most OpenClaw users knew to patch
1
. Researchers from AI app-builder Blink described the practical impact as severe: "A compromised operator.admin device can read all connected data sources, exfiltrate credentials stored in the agent's skill environment, execute arbitrary tool calls, and pivot to other connected services"1
.The vulnerability's impact extends far beyond the technical flaw itself. A scan earlier this year identified approximately 135,000 OpenClaw instances exposed to the Internet, with 63 percent running without authentication
1
. On these deployments, any network visitor can request pairing access and obtain operator.pairing scope without providing credentials, meaning the authentication gate that should slow down privilege escalation attacks simply doesn't exist1
.The vulnerability stems from OpenClaw's failure to invoke authentication during administrative-level pairing requests. The core approval function didn't examine security permissions of the approving party to verify they had privileges required to grant such requests. As long as the pairing request was well-formed, it was approved
1
.Beyond CVE-2026-33579, the OpenClaw ecosystem faces broader security challenges. Koi Security's audit of 2,857 ClawHub skills found 341 malicious entries, representing 11.9 percent of the marketplace
3
. A published arXiv study reported that 26.1 percent of analyzed skills had at least one vulnerability, with 13.3 percent showing data exfiltration patterns and 11.8 percent exhibiting privilege escalation patterns3
.Prompt injection vulnerabilities represent another persistent threat. Every email, message, and webpage an OpenClaw instance processes becomes a potential attack vector. Malicious actors can embed instructions inside content the AI agent reads, tricking it into leaking credentials or executing unauthorized commands
4
. A Kaspersky security audit from early 2026 identified 512 vulnerabilities in OpenClaw, eight of them critical4
.OpenClaw, which launched in November and now boasts 347,000 stars on GitHub, by design takes control of a user's computer and interacts with other apps and platforms to assist with task automation including organizing files, research, and online shopping
1
. To be useful, it needs extensive system access to resources like Telegram, Discord, Slack, local and shared network files, accounts, and logged-in sessions1
.
Source: TechRadar
"Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy," NVIDIA CEO Jensen Huang said during the 2026 GTC conference in March, calling it "the new computer"
2
. Yet this power creates significant trust boundaries. When an AI agent can install helpers, call external tools, and act on a live workspace, the risk extends beyond bad text generation to actual system compromise3
.Related Stories
Cisco has released DefenseClaw, an open-source solution designed to provide a governance layer for autonomous AI agent operations
5
. The security framework adds checks before installation and during runtime through four capability areas: guardrails that inspect traffic and block unsafe outcomes, tool inspection that blocks malicious requests by policy, install scanning that rejects unsafe components before they're trusted, and CodeGuard that scans agent-written code for patterns like shell execution and embedded private keys3
.
Source: Cisco
For organizations evaluating OpenClaw, securing OpenClaw deployments starts with deployment choices. Running OpenClaw in isolated environments like Docker containers configured with non-root users, read-only root filesystems, and localhost-only binding provides better protection than installing on primary work machines
4
. Dedicated hardware or VPS hosting adds network isolation that's difficult to replicate locally4
.Anyone running OpenClaw should carefully inspect all /pair approval events listed in activity logs over the last week to identify potential compromises
1
. Earlier this year, a Meta executive told his team to keep OpenClaw off work laptops or risk termination, citing the unpredictability of the tool and potential for breaches in otherwise secure environments1
.The broader lesson extends beyond this single vulnerability. As Gavriel Cohen, creator of NanoClaw and CEO of NanoCo, notes: "These agents are general-purpose computer agents. Anything that a person can do with a computer, an agent can do"
2
. That capability demands proportional security measures, continuous monitoring through observability tools, and recognition that whatever efficiency gains come from using the tool could easily be undone if a threat actor obtains the keys to a network kingdom1
.Summarized by
Navi
[1]
[4]
[5]
04 Feb 2026•Technology

Today•Technology

03 Mar 2026•Technology

1
Policy and Regulation

2
Technology

3
Technology
