3 Sources
3 Sources
[1]
OpenClaw security fears lead Meta, other AI firms to restrict its use
Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly. Peter Steinberger, OpenClaw's solo founder, launched it as a free, open source tool last November. But its popularity surged last month as other coders contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation. OpenClaw requires basic software engineering knowledge to set up. After that, it only needs limited direction to take control of a user's computer and interact with other apps to assist with tasks such as organizing files, conducting web research, and shopping online. Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says. At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company's president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says. "It's pretty good at cleaning up some of its actions, which also scares me." A week later, Pistone did allow Valere's research team to run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access. In a report shared with WIRED, the Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer. But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner." Some companies concerned about OpenClaw are choosing to trust the cybersecurity protections they already have in place rather than introduce a formal or one-off ban. A CEO of a major software company says only about 15 programs are allowed on corporate devices. Anything else should be automatically blocked, says the executive, who spoke on the condition of anonymity to discuss internal security protocols. He says that while OpenClaw is innovative, he doubts that it will find a way to operate on the company's network undetected. Jan-Joost den Brinker, chief technology officer at Prague-based compliance software developer Dubrink, says he bought a dedicated machine not connected to company systems or accounts that employees can use to play around with OpenClaw. "We aren't solving business problems with OpenClaw at the moment," he says. Massive, the web proxy company, is cautiously exploring OpenClaw's commercial possibilities. Grad says it tested the AI tool on isolated machines in the cloud and then, last week, released ClawPod, a way for OpenClaw agents to use Massive's services to browse the web. While OpenClaw is still not welcome on Massive's systems without protections in place, the allure of the new technology and its moneymaking potential was too great to ignore. OpenClaw "might be a glimpse into the future. That's why we're building for it," Grad says. This story originally appeared on wired.com.
[2]
The AI security nightmare is here and it looks suspiciously like lobster
A hacker tricked a popular AI coding tool into installing OpenClaw -- the viral, open-source AI agent OpenClaw that "actually does things" -- absolutely everywhere. Funny as a stunt, but a sign of what to come as more and more people let autonomous software use their computers on their behalf. The hacker took advantage of a vulnerability in Cline, an open-source AI coding agent popular among developers, that security researcher Adnan Khan had surfaced just days earlier as a proof of concept. Simply put, Cline's workflow used Anthropic's Claude, which could be fed sneaky instructions and made to do things that it shouldn't, a technique known as a prompt injection. The hacker used their access to slip through instructions to automatically install software on users' computers. They could have installed anything, but they opted for OpenClaw. Fortunately, the agents were not activated upon installation, or this would have been a very different story. It's a sign of how quickly things can unravel when AI agents are given control over our computers. They may look like clever wordplay -- one group wooed chatbots into committing crimes with poetry -- but in a world of increasingly autonomous software, prompt injections are massive security risks that are very difficult to defend against. Acknowledging this, some companies instead lock down what AI tools can do if they're hijacked. OpenAI, for example, recently introduced a new Lockdown Mode for ChatGPT preventing it from giving your data away. Obviously, protecting against prompt injections is harder if you ignore the researchers who privately flag flaws to you. Khan said he warned Cline about the vulnerability weeks before publishing his findings. The exploit was only fixed after he called them out publicly.
[3]
This viral AI tool is the future. Don't install it yet
It lives on your devices, works 24/7, makes its own decisions, and has access to your most sensitive files. Think twice before setting OpenClaw loose on your system. A month ago, practically no one had heard about Peter Steinberger's personal AI side project. Now it's taken the AI world by storm, and it just got the backing of none other than OpenAI itself. First known as Clawdbot and later as Moltbot, the now re-rebranded OpenClaw served as an "I know Kung Fu" moment for its earliest users, who were jolted by the capabilities and potential of the AI-powered tool. Put another way, OpenClaw took what had previously been an abstract concept -- "agentic AI" -- and made it real. It's exciting and even vertiginous stuff, and if this story marks the first time you've heard of OpenClaw, you absolutely, positively shouldn't install it. Meet OpenClaw Developed by the aforementioned Peter Steinberger, an Australian software developer who was just "acqui-hired" by OpenAI (the software itself remains open-source), OpenClaw is a tool that lives on your system and -- if you let it -- can tap in to your most sensitive data, from your email and calendar to your browser and your personal files. OpenClaw works best on a system that's running 24/7, allowing it to work constantly on your behalf. It can remember who you are and what's important to use, using easy-to-read "markdown" files (like MEMORY.md and USER.md) to keep track of details like your name, where you live and work, what kind of system you're using, who your family members are, what's your favorite color, and basically whatever you want to tell it. OpenClaw also has a "soul"-or, more specifically, a SOUL.md file that tells the AI (you can choose from Anthropic's Claude, ChatGPT, Google Gemini, or any number of other cloud-based or locally hosted LLMs) how it should act and present itself, while a HEARTBEAT.md file manages OpenClaw's laundry list of activities, allowing it to check your calendar on a daily basis, poke around your email inbox every hour, or scour the web for news at regular intervals. Well, fine, but so what? Aren't there any number of AI tools that can comb through your email and give you hourly news updates? There are indeed, but OpenClaw comes with a couple of game changers. The first ace up OpenClaw's sleeve is the way you interact with it. Rather than having to use a local Web interface or the command line, OpenClaw works with familiar chat apps like WhatsApp, Telegram, Discord, Slack, Signal, and even iMessage. That means you can chat with the bot on your phone, anytime and anywhere. The second is that OpenClaw -- when installed using its default configuration -- has "host" access to your system, meaning it has the same system-level permissions that you do. It can read files, it can edit files, and it can delete files at will, and it can even write scripts and programs to enhance its own abilities. Ask it for a tool that can generate images, check your favorite RSS feeds, or transcribe audio transcripts, OpenClaw won't simply tell you which programs to download -- it will go ahead and build them, right on your system. In other words, OpenClaw is ChatGPT without the chatbox -- or as the official OpenClaw website puts it, an "AI that can actually do things." Now, there already are tools that let AI do things, namely "no-code" editors that allow AI to build software and web sites with prompts. But Claude Code, OpenAI's Codex, and Google's Antigravity are designed to be AI coding helpers that do the work while we peer over their shoulders, watching their every move. OpenClaw, on the other hand, aims to do its magic autonomously, while you're at work, sleeping, or otherwise engaged elsewhere. It's a true AI agent. Personally, I'm blown away by the possibilities of OpenClaw and its inevitable clones and ecosystem. Heck, I'll tell you right now: This is the future, like it or not. At the same time, I believe unleashing OpenClaw without knowing what you're doing is akin to handing a bazooka to a toddler, and I'm not the only one who thinks so. The key issue is the level of access OpenClaw gets to your system. It sees everything you do and can do anything you do on your computer, right down to deleting individual files or entire directories of them, and is thus one hallucination away from wreaking havoc on your data. While OpenClaw operates under a battery of rules that regulate its behavior and (thanks to a series of new security enhancements) limits its access to a designated "workspace" directory, it's all too easy to change that behavior, and you could unwittingly give OpenClaw god-mode access through injudicious use of "sudo," the Linux "superuser" command. OpenClaw is also worryingly vulnerable to "prompt injection" attacks, which aim to trick an LLM into ignoring its guardrails and do things like leak your private data, install a backdoor on your system, or even execute a root-level "rm -rf" command on your system, which would nuke your entire hard drive. Then there's the growing ecosystem of unverified third-party OpenClaw plug-ins that could be riddled with security holes or hiding malicious payloads. But most of all, what makes OpenClaw so exciting is also what makes it the most dangerous. It can stay up all day and night thanks to its "heartbeat," taking your suggestions and running with them, all of which can lead to unexpected, surprising, or even destructive results, particularly if you've paired OpenClaw with a cheap or free LLM that lacks the context and reasoning powers of the priciest top-of-the-line models. Now, I'm a moderately experienced LLM user and self-hoster, and I've yet to fully install OpenClaw on any of my machines. I'd toyed with it, poked at it, tinkering with it in an isolated Docker container, and chatted with it over Discord, and I'm even trying to build my own version with help from Gemini and Antigravity. (Whether I'm actually getting anywhere will be the subject of another story.) But as impressed as I am by OpenClaw's system-wide powers -- and believe me, I see the potential -- I'm also spooked by them, and you should be too.
Share
Share
Copy Link
OpenClaw, an experimental agentic AI tool developed by Peter Steinberger, is facing widespread restrictions from Meta and other tech companies due to profound security risks. The autonomous AI tool, which gained viral popularity and was recently backed by OpenAI, raises concerns about unauthorized access to sensitive data and vulnerabilities to prompt injection attacks that could compromise corporate systems.
A late-night warning sent to employees at tech startup Massive marked the beginning of a broader industry reckoning with OpenClaw, an experimental agentic AI tool that has tech firms scrambling to protect their systems. Jason Grad, cofounder and CEO of the company, issued the alert on January 26 with a red siren emoji, urging his 20 employees to keep the software off company hardware and away from work-linked accounts
1
. His concern wasn't isolated. A Meta executive recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs, citing the software's unpredictability and potential privacy breaches in otherwise secure environments1
.
Source: PCWorld
Developed by Peter Steinberger, an Australian software developer, OpenClaw launched as a free, open-source tool last November before its popularity surged as other coders contributed features and shared their experiences on social media
1
. Last week, Steinberger joined OpenAI, which committed to keeping OpenClaw open-source and supporting it through a foundation1
. The autonomous AI tool takes control of a user's computer to assist with tasks such as organizing files, conducting web research, and shopping online, requiring only limited direction after initial setup1
.The security risks associated with autonomous AI software like OpenClaw stem from its system-level permissions and constant operation. When installed using its default configuration, OpenClaw has "host" access to systems, meaning it possesses the same permissions as the user
3
. It can read, edit, and delete files at will, and even write scripts to enhance its own abilities. The AI agent works best on systems running 24/7, allowing it to work constantly on behalf of users while accessing sensitive data from email, calendars, browsers, and personal files3
.
Source: The Verge
At Valere, a software company working with organizations including Johns Hopkins University, CEO Guy Pistone expressed alarm at OpenClaw's capabilities. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone told reporters
1
. When an employee posted about OpenClaw on an internal Slack channel on January 29, the company's president quickly responded with a strict ban1
.The most alarming aspect of OpenClaw involves its vulnerabilities to prompt injection attacks, a technique that tricks AI systems into ignoring their guardrails. A hacker recently exploited a vulnerability in Cline, an open-source AI coding agent popular among developers, that security researcher Adnan Khan had flagged days earlier
2
. The hacker used prompt injection to automatically install OpenClaw on users' computers, demonstrating how quickly things can unravel when AI agents control systems2
.Cline's workflow used Anthropic's Claude, which could be fed sneaky instructions to perform unauthorized actions
2
. Khan said he warned Cline about the vulnerability weeks before publishing his findings, but the exploit was only fixed after he called them out publicly2
. In a world of increasingly autonomous software, prompt injection attacks represent massive cybersecurity challenges that are difficult to defend against.Valere researchers investigating OpenClaw warned in a report that users must "accept that the bot can be tricked"
1
. For instance, if OpenClaw is configured to summarize email, a hacker could send a malicious message instructing the AI agent to share copies of files from the user's computer, creating unauthorized access to sensitive data1
.Related Stories
Companies are adopting varied approaches to manage the threat. Grad at Massive says his company follows a "mitigate first, investigate second" policy when encountering anything potentially harmful to their company, users, or clients
1
. Despite the ban, Massive cautiously explored OpenClaw's commercial possibilities by testing the experimental agentic AI tool on isolated machines in the cloud, later releasing ClawPod to allow OpenClaw agents to use their services for web browsing1
.Pistone at Valere allowed his research team to run OpenClaw on an employee's old computer a week after the initial ban, aiming to identify flaws and potential fixes
1
. The team advised limiting who can give orders to OpenClaw and exposing it to the Internet only with password protection for its control panel. Pistone gave his team 60 days to investigate, stating, "Whoever figures out how to make it secure for businesses is definitely going to have a winner"1
.Jan-Joost den Brinker, chief technology officer at Prague-based Dubrink, purchased a dedicated machine not connected to company systems that employees can use to experiment with the tool . Meanwhile, OpenAI recently introduced a Lockdown Mode for ChatGPT to prevent data leakage if AI tools are hijacked
2
. The bans show how tech firms are moving quickly to ensure AI security is prioritized ahead of their desire to experiment with emerging technologies, even as they recognize OpenClaw represents the future of autonomous AI agents1
.Summarized by
Navi
27 Jan 2026•Technology

04 Feb 2026•Technology

27 Jan 2026•Technology

1
Policy and Regulation

2
Business and Economy

3
Policy and Regulation
