10 Sources
10 Sources
[1]
Ready For Clawdbot To Click And Claw Its Way Into Your Environment?
The (AI) Butler Did It If you hang out in the same corners of the internet that I do, chances are you've seen Clawdbot, the AI butler in action. You've seen the screenshots that show empty inboxes an AI cleaned up. You likely read stories about personal bots that write code all night and send cheerful status updates in the morning. Maybe you've seen pics of neat Mac Mini stacks with captions that basically say, "I bought this so my AI butler has somewhere to live" and "I bought a second so my AI assistant could have an AI assistant." Clawdbot went viral because Clawdbot looks FUN. I almost set up a Clawdbot system myself just to see what all the buzz was about. Then I stopped and thought about my actual life. I realized that...I don't really need this. I think it's cool. I want to use it. I want to need it. I just cannot find enough real use cases in my own day to justify giving an AI that level of access. Or, realistically, I realized I didn't need it for personal use. But for work...I could see dozens of use cases right away. Clawdbot feels magical for individual power users to plow through work. However, AI tools like Clawdbot are terrifying when you map their use into an enterprise threat model. Do I think Clawdbot is barging into your enterprise today or tomorrow? No. But, history teaches us that users find ways to make their work lives easier all the time and AI butlers like Clawdbot foretell the future. Clawdbot Is The AI Butler Users Already Love Clawdbot is a self-hosted personal assistant that runs on your own hardware (or cloud instance) and wires itself into the tools you already use. It connects to chat platforms like WhatsApp, Telegram, Slack, Signal, Microsoft Teams (ahem), and others. It forwards your instructions to large language models (LLMs) like Claude, and it can act on those instructions with access to files, commands, and a browser. A few themes dominate the conversation from early adopters, including: * It's a single assistant across everything. Users talk to the same bot in chat, on mobile, and in other channels. The gateway keeps long term memory and summarizes past interactions, so the assistant feels persistent and personal. It remembers projects, preferences, even small quirks, and it starts to anticipate the next step. It becomes the interface between the user and various tools. * Clawdbot doesn't just give simple answers, it takes initiative. The agent does not wait for prompts. It sends morning briefings. It watches inboxes and suggests drafts. It monitors calendars, wallets, and websites, then alerts you when something changes. It behaves more like an assistant than a static tool. * It features real-world automation. Skills let it run commands, organize files, fill out web forms, and interact with devices. The community keeps adding more. Some stories even describe the agent that writes its own skills to reach new systems when users ask for something it can't do (yet). * Everyone gets a Mac Mini now. Because this setup works best on an always-on box, many enthusiasts have bought a Mac Mini just to host their personal AI butler. That trend shows up in social media posts that celebrate dedicated hardware purchases and even small Mac Mini clusters for automation. From a user perspective this feels COOL. It seems like this is what AI should do for us. From a security perspective it looks like a very effective way to drop a new and very powerful actor into your environment with zero guardrails. That personal moment where I almost installed Clawdbot matters. I spend my time thinking about threat models, securing AI, and security outcomes. If anyone can rationalize a lab project in the name of research, it's me. I still looked at the required level of access and decided that my personal life does not justify it. My personal calendar does not need an autonomous agent that can run shell commands. My personal email does not need an extra brain in the middle that reads everything and can act on anything. But there's that temptation that my work life...my work life...could really use something like this. How could an AI butler help my work life? My first thought is...email. There are the dozens of meeting requests for RSAC. Then there are the emails about when I'll be traveling to the west coast, asking if I can squeeze in a few more client engagements before the end of February, or if I can make time to meet with an APAC client in the late evening. Then there are those Teams messages I made the mistake of reading, so they aren't showing as unread anymore. Oh, then there's that Excel data analysis I want to do for that report that I've been talking about forever. The list goes on. Employees in your company will face the same temptation. They see the same buzz I do. They will watch the same videos and read the same glowing threads. Some will think, "I can use this at work and become twice as productive!" Welcome to your nightmare. So, before a hobbyist introduces a silent superuser into your environment that operates as an agent running with root level permissions that turns every command channel into a prompt injection magnet...take some steps. Take Practical Steps Before An AI Butler Barges In Your Door It's inevitable that users will try to use these tools at work. Maybe they're already doing it. Take practical steps to gain control by: Publishing a clear position on self-hosted AI agents. State whether or not staff may run personal agents with work identities or data. Make your default answer very conservative. If you allow limited pilots, define where, how, and under whose oversight those can be run. Ensure that you note the difference between AI applications and personal agents. Users may not understand the difference as well as you do. Requiring isolation and separate identities for any sanctioned pilots. Insist on dedicated devices or virtual machines for agents. Use separate bot accounts with restricted permissions rather than full user accounts. Don't allow those agents to touch crown jewel systems or data until you design a proper pattern. Forcing human approval for risky or irreversible actions. Use policy and technical controls that require explicit confirmation before agents send external email, delete data, change production systems, or access sensitive client information. Treat the agent as you would a very fast but very literal junior employee. Adding AI agent signals to your shadow IT playbook. Look for model API traffic from unexpected hosts. Watch for unapproved automation that spans multiple systems. Educating enthusiasts instead of just blocking them. Your power users will experiment no matter what you say. Give them a channel to do it safely. Share the risks that the report outlines. Explain prompt injection in plain language. Ask them to help you test guardrails rather than work around them. Ensuring your email, messaging, and collaboration security solution is ready for "email salting." Just in case an AI butler is lurking in the shadows of your enterprise, your solution, which by now should include AI/ML content analysis, must be tuned to detect hidden characters, zero‑font, and white‑on‑white text, enforce SPF/DKIM/DMARC to cut spoofed or "salted" messages designed to give AI agents or bots nefarious instructions. A Simple And Slightly Funny Detection Hint You already track strange authentication events, impossible travel, unusual data movement, and many other classic signals. You should add one very human signal to the list. If you start to see a wave of procurement requests for Mac Mini hardware from developers, operations teams, or the one person who always builds side projects in the corner, treat that as a soft but real indicator for personal AI butler . A final thought for security leaders: the AI butler wave will not wait for your policies to catch up, and your users will not self regulate. Clawdbot and tools like it thrive because they feel helpful, personal, and frictionless, which is exactly why they become dangerous when they slip into enterprise environments without oversight. Treat this moment as early warning of what's coming in the next phase of AI adoption: hyper-personalized, action-oriented, integration-focused assistants. Use the runway you have now to finetune policies, educate enthusiasts, and tune your detection strategies.
[2]
Clawdbot Is the Hot New AI Agent, But Its Creator Warns of 'Spicy' Security Risks
The internet's latest AI obsession is a lobster-inspired agentic assistant called Clawdbot. It's not particularly common for an open-source AI tool to go viral, given its fairly niche audience and the technical know-how required to set it up on GitHub. So, this one caught our attention. It also reached Anthropic, which asked Clawdbot developers to change the tool's name due to its similarity to the Claude AI chatbot. It complied, so Clawdbot has now been renamed Moltbot. "Honestly? 'Molt' fits perfectly -- it's what lobsters do to grow," the team says. Whatever you call it, Clawdbot/Moltbot is free to download, but it'll cost about $3-$5 per month to run on a basic Virtual Private Server (VPS). Some people have had success setting it up on AWS's free tier. Contrary to the impression social media posts can give, you do not need an Apple Mac mini to run it, according to Clawdbot's creator Pete Steinberger. Clawdbot/Moltbot will run on any computer, including that old laptop collecting dust in your closet. Steinberger's X bio claims he "came back from retirement to mess with AI and help a lobster take over the world." Yet momentum around agentic assistants largely petered out late last year. Perplexity's Comet browser felt half-baked and not entirely useful, our analyst Ruben Cirelli found. OpenAI warned that its Atlas AI browser may purchase the wrong product on your behalf, and is vulnerable to prompt injection attacks. Will Steinberger's tool revive interest? Should it? The defining features of Clawdbot/Moltbot are that it can (1) proactively take actions without you needing to prompt it, and (2) make those decisions by accessing large swaths of your digital life, including your external accounts and all the files on your computer, sort of like Claude Cowork. It might clear out your inbox, send a morning news briefing, or check in for your flight. When it's done, it'll message you through your app of choice, such as WhatsApp, iMessage, or Discord. This open access has raised security concerns. Support documentation even acknowledges that "Running an AI agent with shell access on your machine is... spicy. There is no 'perfectly secure' setup." You can run it on the AI model of your choice, either locally or in the cloud. "For an agent to be useful, it must read private messages, store credentials, execute commands, and maintain persistent state," says threat intelligence platform SOCRadar. "Each requirement undermines assumptions that traditional security models rely on." SOCRadar recommends treating Clawdbot/Moltbot as "privileged infrastructure" and implementing additional security precautions. "The butler can manage your entire house. Just make sure the front door is locked." Some argue that keeping data local enhances security, but Infostealers notes that hackers are finding ways to tap into local data, a treasure trove for nefarious actors. "The rise of 'Local-First' AI agents has introduced a new, highly lucrative attack surface for cybercriminals," it says. "ClawdBot...offers privacy from big tech, [but] it creates a 'honey pot' for commodity malware." The important thing is to make sure you limit "who can talk to your bot, where the bot is allowed to act, [and] what the bot can touch" on your device, the bot's support documentation says. Developers have begun sharing steps they've taken to shore up security. "Start with the smallest access that still works, then widen it as you gain confidence," Clawdbot/Moltbot recommends.
[3]
Exploring Clawdbot, the AI agent taking the internet by storm -- AI agent can automate tasks for you, but there are significant risks involved
The new pseudo-locally-hosted gateway for agentic AI offers a sneak peek at the future -- both good and bad. If you've spent any time in AI-curious corners of the internet over the past few weeks, you've probably seen the name "Clawdbot" pop up. The open-source project has seen a sudden surge in attention, helped along by recent demo videos, social media chatter, and the general sense that "AI agents" are the next big thing after chatbots. For folks encountering it for the first time, the obvious questions follow quickly: What exactly is Clawdbot? What does it do that ChatGPT or Claude don't? And is this actually the future of personal computing, or a glimpse of a future we should approach with caution? The developers of Clawdbot position it as a personal AI assistant that you run yourself, on your own hardware. Unlike chatbots accessed through a web interface, Clawdbot connects to messaging platforms like Telegram, Slack, Discord, Signal, or WhatsApp, and acts as an intermediary: you talk to it as if it were a contact, and it responds, remembers, and (crucially) acts, by sending messages, managing calendars, running scripts, scraping websites, manipulating files, or executing shell commands. That action is what places it firmly in the category of "agentic AI," a term increasingly used to describe systems that don't just answer questions, but take steps on a user's behalf. Technically, Clawdbot is best thought of as a gateway rather than a model, as it doesn't include an AI model of its own. Instead, it routes messages to a large language model (LLM), interprets the responses, and uses them to decide which tools to invoke. The system runs persistently, maintains long-term memory, and exposes a web-based control interface where users configure integrations, credentials, and permissions. From a user perspective, the appeal is obvious. You can ask Clawdbot to summarize conversations across platforms, schedule meetings, monitor prices, deploy code, clean up an inbox, or run maintenance tasks on a server, for example, all through natural language. It's the old "digital assistant" promise, but taken more seriously than voice-controlled reminders ever were. In that sense, Clawdbot is less like Apple's Siri and more like a junior sysadmin who never sleeps, at least theoretically. Not exactly as "local" as often advertised by fans We should clarify one important detail obscured by the hype, though: by default, Clawdbot does not run its AI locally, and doing so is non-trivial. Most users connect it to cloud-hosted LLM APIs from providers like OpenAI, or indeed, Anthropic's "Claude" series of models, which is where the name comes from. Running a local model is possible, but doing so at a level that even approaches cloud-hosted frontier models requires substantial hardware investment in the form of powerful GPUs, plenty of memory, and a tolerance for tradeoffs in speed and quality. For most users, "self-hosted" refers to the agent infrastructure, not the intelligence itself. Messages, context, and instructions still pass through external AI services unless the user goes out of their way to avoid that. This architectural choice matters because it shapes both the benefits and the risks. Clawdbot is powerful precisely because it concentrates access. It has all of your credentials for every service it touches because it needs them. It reads all of your messages because that's the job. It can run commands because otherwise it couldn't automate anything. In security terms, it becomes an extremely high-value target; a single system that, if compromised, exposes a user's entire digital life. That risk was illustrated recently by security researcher Jamieson O'Reilly, who documented how misconfigured Clawdbot deployments had left their administrative interfaces exposed to the public internet. In hundreds of cases, unauthenticated access allowed outsiders to view configuration data, extract API keys, read months of private conversation history, impersonate users on messaging platforms, and even execute arbitrary commands on the host system, sometimes with root access. The specific flaw O'Reilly identified, a reverse-proxy configuration issue that caused all traffic to be treated as trusted, has since been patched. Focusing on the patch misses the point, though. The incident wasn't notable because it involved a clever exploit; it was notable because it exposed the structural risks inherent in agentic systems. Even when correctly configured, tools like Clawdbot require sweeping access to function at all. They must store credentials for multiple services, read and write private communications, maintain long-term conversational memory, and execute commands autonomously. This can technically still conform to the principle of least privilege, but only in the narrowest sense; the "least" privilege an agent needs to be useful is still an extraordinary amount of privilege, concentrated in a single always-on system. Fixing one misconfiguration doesn't meaningfully reduce the blast radius if another failure occurs later, and experience suggests that eventually, something always does. Agentic AI is awfully convenient, but great caution is advised Skepticism about agentic AI is less about fear of the technology and more about basic systems thinking. Large language models are very explicitly not agents in the human sense. They don't understand intent, responsibility, or consequence. They are essentially very advanced heuristic engines that produce statistically plausible responses based on patterns, not grounded reasoning. When such systems are given the authority to send messages, run tools, and make changes in the real world, they become powerful amplifiers of both productivity and error. It's worth noting that much of what Clawdbot does could be accomplished without an AI model in the mix at all. Regular old deterministic scripts, cron jobs, workflow engines, and other traditional automation tools can already monitor systems, move data, trigger alerts, and execute commands with far more predictability. The neural network enters the picture primarily to translate vague human language into structured actions, and that convenience is real, but it comes at the cost of opacity and uncertainty. When something goes wrong, the failure mode isn't always obvious, or even immediately visible to the user. There is also a quieter, more practical cost to agentic AI that often gets overlooked, as many of its most ardent supporters were already paying for it, and that cost is simple: money. Most Clawdbot deployments rely on cloud-hosted AI models accessed through paid APIs, not local inference. Unlike webchat interfaces that are typically metered in the number of responses, API usage is metered by tokens. That means every message, every summary, every planning step costs something. Agentic systems tend to be especially expensive because they are "chatty" behind the scenes, constantly maintaining context, evaluating conditions, and looping through tool calls. An always-on agent mediating multiple message streams can burn through tens or hundreds of thousands of tokens per day without doing anything particularly dramatic. Over the course of a month, that turns into a nontrivial bill, effectively transforming a personal assistant into a small but persistent operating expense. Against this backdrop, the broader industry rhetoric starts to look a little unmoored. For example, Microsoft has openly discussed its ambition to turn Windows into an "agentic OS," where users abandon keyboards and mice in favor of voice-controlled AI agents by the end of the decade. The idea that most people will happily hand continuous operational control of their computers to probabilistic systems by 2030 deserves, at the bare minimum, a raised eyebrow. History suggests that users adopt alternative input methods and automation selectively, not wholesale, particularly when the stakes involve the loss of privacy, data, or indeed, money. Clawdbot is a glimpse at the future To be clear, none of this means Clawdbot is a bad project. In fact, quite to the contrary, it's a clear, well-engineered example of where agentic AI is heading, and also why people find the tech compelling. It's also neither the first nor the last tool of its kind. Similar systems are emerging across open-source communities and enterprise platforms alike, all promising to turn intent into action with minimal friction. The more important takeaway is that tools like Clawdbot demand a level of technical understanding and operational discipline that most users simply don't have. Running your own Clawdbot requires setting up a Linux server, configuring authentication and security settings, managing permissions and a command whitelist, and a comprehensive grasp of sandboxing. Running an always-on agent with access to credentials, messaging platforms, and system commands is not the same as opening a chat window in a browser, and it never will be. For many people, the safer choice will remain traditional cloud AI interfaces, where the blast radius of a mistake is smaller and the responsibility boundary clearer. Agentic AI may well become a foundational layer of future computing, but if Clawdbot is any indication, that future will require more caution, not less.
[4]
Clawdbot becomes Moltbot, but can't shed security concerns
The massively hyped agentic personal assistant has security experts wondering why anyone would install it Security concerns for the new agentic AI tool formerly known as Clawdbot remain, despite a rebrand prompted by trademark concerns raised by Anthropic. Would you be comfortable handing the keys to your identity kingdom over to a bot, one that might be exposed to the open internet? Clawdbot, now known as Moltbot, has gone viral in AI and developer circles in recent days, with fans hailing the open-source "AI personal assistant" as a potential breakthrough. The long and short of it is that Moltbot can be controlled using messaging apps, like WhatsApp and Telegram, in a similar way to the GenAI chatbots everyone knows about. Taking things a little further, its agentic capabilities allow it to take care of life admin for users, such as responding to emails, managing calendars, screening phone calls, or booking table reservations - all with minimal intervention or prompting from the user. All that functionality comes at a cost, however, and not just the outlay so many seem to be making on Mac Mini purchases for the sole purpose of hosting a Moltbot instance. In order for Moltbot to read and respond to emails, and all the rest of it, it needs access to accounts and their credentials. Users are handing over the keys to their encrypted messenger apps, phone numbers, and bank accounts to this agentic system. Naturally, security experts have had a few things to say about it. First, there was the furor around public exposures. Moltbot is a complex system, and despite being as easy to install as a typical app on the face of it, the misconfigurations associated with it prompted experts to highlight the dangers of running Moltbot instances without the proper know-how. Jamieson O'Reilly, founder of red-teaming company Dvuln, was among the first to draw attention to the issue, saying that he saw hundreds of Clawdbot instances exposed to the web, potentially leaking secrets. He told The Register that the attack model he reported to Moltbot's developers, which involved proxy misconfigurations and localhost connections auto-authenticating, is now fixed. However, if exploited, it could have allowed attackers to access months of private messages, account credentials, API keys, and more - anything to which Clawdbot owners gave it access. According to his Shodan scans, supported by others looking into the matter, he found hundreds of instances exposed to the web. If those had open ports allowing unauthenticated admin connections, it would allow attackers access to the full breadth of secrets in Moltbot. "Of the instances I've examined manually, eight were open with no authentication at all and exposing full access to run commands and view configuration data," he said. "The rest had varying levels of protection. "Forty-seven had working authentication, which I manually confirmed was secure. The remainder fell somewhere in between. Some appeared to be test deployments, some were misconfigured in ways that reduced but didn't eliminate exposure." On Tuesday, O'Reilly published a second blog detailing a proof-of-concept supply chain exploit for ClawdHub - the AI assistant's skills library, the name of which has not yet changed. He was able to upload a publicly available skill, artificially inflate the download count to more than 4,000, and watch as developers from seven countries downloaded the poisoned package. The skill O'Reilly uploaded was benign, but it proved he could have executed commands on a Moltbot instance. "The payload pinged my server to prove execution occurred, but I deliberately excluded hostnames, file contents, credentials, and everything else I could have taken," he said. "This was a proof of concept, a demonstration of what's possible. In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong." ClawdHub states in its developer notes that all code downloaded from the library will be treated as trusted code - there is no moderation process at present - so it's up to developers to properly vet anything they download. Therein lies one of the key issues with the product. It is being heralded by nerds as the next big AI offering, one that can benefit everyone, but in reality, it requires a specialist skillset in order to use safely. Eric Schwake, director of cybersecurity strategy at Salt Security, told The Register: "A significant gap exists between the consumer enthusiasm for Clawdbot's one-click appeal and the technical expertise needed to operate a secure agentic gateway. "While installing it may resemble a typical Mac app, proper configuration requires a thorough understanding of API posture governance to prevent credential exposure due to misconfigurations or weak authentication. "Many users unintentionally create a large visibility void by failing to track which corporate and personal tokens they've shared with the system. Without enterprise-level insight into these hidden connections, even a small mistake in a 'prosumer' setup can turn a useful tool into an open back door, risking exposure of both home and work data to attackers." The security concerns surrounding Moltbot persist even when it is set up correctly, as the team at Hudson Rock pointed out this week. Its researchers said they looked at Moltbot's code and found that some of the secrets shared with the assistant by users were stored in plaintext Markdown and JSON files on the user's local filesystem. The implication here is that if a host machine, such as one of the Mac Minis being bought en masse to host Moltbot, were infected with infostealer malware, then it would mean the secrets stored by the AI assistant could be compromised. Hudson Rock is already seeing malware as a service families implement capabilities to target local-first directory structures, such as those used by Moltbot, including Redline, Lumma, and Vidar. It is fathomable that any of these popular strains of malware could be deployed against the internet-exposed Moltbot instances to steal credentials and carry out financially motivated attacks. If the attacker is also able to gain write access, then they can turn Moltbot into a backdoor, instructing it to siphon sensitive data in the future, trust malicious sources, and more. "Clawdbot represents the future of personal AI, but its security posture relies on an outdated model of endpoint trust," said Hudson Rock. "Without encryption-at-rest or containerization, the 'Local-First' AI revolution risks becoming a goldmine for the global cybercrime economy." The start of something bigger O'Reilly said that Moltbot's security has captured the attention of the industry recently, but it is only the latest example of experts warning about the risks associated with wider deployments of AI agents. In a recent interview with The Register, Palo Alto Networks chief security intel officer Wendi Whitmore warned that AI agents could represent the new era of insider threats. As they are deployed across large organizations, trusted to carry out tasks autonomously, they become increasingly attractive targets for attackers looking to hijack these agents for personal gain. The key will be to ensure cybersecurity is rethought for the agentic era, ensuring each agent is afforded the least privileges necessary to carry out tasks, and that malicious activity is monitored stringently. "The deeper issue is that we've spent 20 years building security boundaries into modern operating systems," said O'Reilly. "Sandboxing, process isolation, permission models, firewalls, separating the user's internal environment from the internet. All of that work was designed to limit blast radius and prevent remote access to local resources. "AI agents tear all of that down by design. They need to read your files, access your credentials, execute commands, and interact with external services. The value proposition requires punching holes through every boundary we spent decades building. When these agents are exposed to the internet or compromised through supply chains, attackers inherit all of that access. The walls come down." Heather Adkins, VP of security engineering at Google Cloud, who last week warned of the risks AI would present to the world of underground malware toolkits, is flying the flag for the anti-Moltbot brigade, urging people to avoid installing it. "My threat model is not your threat model, but it should be. Don't run Clawdbot," she said, citing a separate security researcher who claimed Moltbot "is an infostealer malware disguised as an AI personal assistant." Principal security consultant Yassine Aboukir said: "How could someone trust that thing with full system access?" ®
[5]
Clawdbot AI assistant: What it is, how to try it
Interest in Clawdbot, an open-source AI personal assistant, has been building from a simmer to a roar. Over the weekend, online chatter about the tool reached viral status -- at least, as viral as an open-source AI tool can be. Clawdbot has developed a cult following in the early adopter community, and AI nerds in Silicon Valley are obsessively sharing best practices and showing off their DIY Clawdbot setups. The free, open-source AI assistant is commonly run on a dedicated Mac Mini (though other setups are possible), with users giving it access to their ChatGPT or Claude accounts, as well as email, calendars, and messaging apps. Clawdbot has gone so viral on X that it's reached meme status, with developers sharing tongue-in-cheek memes about their Clawdbot setups. So, what is Clawdbot 🦞, how can you try it, and why is it suddenly the talk of the town in Silicon Valley? Clawdbot is an AI personal assistant As previously mentioned, Clawdbot is an open-source AI assistant that runs locally on your device. The tool was built by developer and entrepreneur Peter Steinberger, best known for creating and selling PSPDFKit. The tool is often associated with the lobster emoji, for reasons that should be obvious. Clawdbot is an impressive example of agentic AI, meaning it's a tool that can act autonomously and complete multi-step actions on behalf of the user. The year 2025 was supposed to be the year of AI agents; instead, many high-profile agentic AI implementations failed to deliver results, and there's a growing sense that AI agents are hitting a wall. However, Clawdbot users say that the tool delivers where previous assistants have failed. The personal AI assistant remembers everything you've ever told it, and users can also grant it access to their email, calendar, and docs. On top of that, Clawdbot can proactively take personalized action. So, not only does Clawdbot check your email, but it can send you a message the moment a high-priority email arrives. Based on its viral success, I'd be shocked if Steinberger isn't being courted by AI companies like OpenAI and Anthropic. Mashable reached out to Steinberger to ask about Clawdbot, and we'll update this post if we receive a response. How to try Clawdbot Steinberger has uploaded the source code for Clawdbot to Github, and you can download, install, and start experimenting with Clawdbot right away. (Find Clawdbot on Github.) That said, downloading and setting up Clawdbot isn't as simple as downloading a typical app or piece of software. You'll need some technical know-how to get Clawdbot running on your device. There are also some serious security and privacy concerns to consider. More on that in a moment. You can run Clawdbot on Mac, Windows, and Linux devices, and the Clawdbot website has installation instructions, system requirements, and tips. Don't try Clawdbot without understanding the risks Part of the reason that Clawdbot succeeds where other AI agents have failed is that it has full system access to your device. That means it can read and write files, run commands, execute scripts, and control your browser. Steinberger is clear about the fact that running Clawdbot carries certain risks. Running an AI agent with shell access on your machine is... spicy," an FAQ reads. "Clawdbot is both a product and an experiment: you're wiring frontier-model behavior into real messaging surfaces and real tools. There is no 'perfectly secure' setup." (Emphasis in original.) Users can access a security audit tool for Clawdbot on Github, and the Clawdbot FAQ also has a useful security section. A sub-section titled "The Threat Model" notes that bad actors could "Try to trick your AI into doing bad things" and "Social engineer access to your data."
[6]
Fast-Growing Open-Source AI Assistant Is Testing the Limits of Automation -- and Safety
Heavy token consumption has surprised early adopters, with some developers reporting hundreds of dollars in costs within days of routine use. An open-source AI assistant has exploded across developer communities in recent weeks, racking up over 10,200 GitHub stars and 8,900 Discord members since its January release. Clawdbot promises what Siri never delivered: an AI that actually does things. Alex Finn, CEO of CreatorBuddy, texted his Clawdbot, Henry, to make a restaurant reservation. "When the OpenTable res didn't work, it used its ElevenLabs skill to call the restaurant and complete the reservation," Finn wrote on X. "AGI is here, and 99% of people have no clue." Clawdbot stands out for keeping user context on-device, being open source and shipping at an unusually fast pace, developer Dan Peguine wrote on X on Saturday. It also works across major messaging platforms and offers persistent memory with proactive background tasks that go well beyond a typical personal assistant, he added. Plus, it's pretty easy for everyday users to install. Clawdbot uses the Model Context Protocol to connect AI models like Claude or GPT with real-world actions without human intervention. The system can run locally on just about any hardware and connects through messaging apps you already use -- WhatsApp, Telegram, Discord, Slack, Signal, iMessage. It can execute terminal commands, control browsers, manage files, and make phone calls. From investment advice to OnlyFans account management, anything seems to be possible as long as you have the creativity to build it, the resources to pay for the tokens, and the balls to afford the consequences when things go sideways. Unfettered access Still, Clawdbot is raising concerns among those in the security community who have discovered a problem. AI researcher Luis Catacora ran a Shodan scan and found an issue: "Clawdbot gateways are exposed right now with zero auth (they just connect to your IP and are in)... That means shell access, browser automation, API keys. All wide open for someone to have full control of your device." In effect, powerful systems placed in inexperienced hands have left many machines exposed. The remedy is relatively straightforward: change a gateway binding from a public setting to a local one, then restart. The step is not intuitive, and the default configuration has left many users vulnerable to remote attacks. The recommended response is to immediately restrict network access, add proper authentication and encryption, rotate potentially compromised keys, and implement rate limits, logging, and alerting to reduce the risk of abuse. The system's heavy token usage has surprised users, prompting developers to recommend lower-cost models or local deployments to manage consumption. Federico Viticci at MacStories burned through 180 million tokens in his first week. On Hacker News, one developer reported spending $300 in two days on what they considered "basic tasks." Clawdbot is the creation of Peter Steinberger, founder of PSPDFKit (now called Nutrient), who came out of retirement to build what he calls a "24/7 personal assistant." For now, given the costs, it is recommended to be careful about what you ask your assistant to do. The project documentation includes a security guide and diagnostic commands to check for misconfigurations. The community is shipping fixes at a rapid pace at roughly 30 pull requests daily, but adoption of security safeguards still lags behind installation rates.
[7]
AI Enthusiasts Are Running 'Clawdbot' on Their Mac Minis, but You Probably Shouldn't
There are some serious security risks in letting a bot like this take over your entire computer. I am a self-professed AI skeptic. I have yet to really find much of a need for all these AI-powered assistants, as well as many AI-powered features. The most useful applications in my view are subtle -- the rest seem better suited for shareholders than actual people. And yet, the AI believers have a new tool they're very excited about, which is now all over my feeds: Clawdbot. Could this agentic AI assistant be the thing that makes me a believer as well? Spoiler alert: probably not. What is Clawdbot? If you're deep in the online AI community, you probably already know about Clawbot. For the rest of us, here's the gist: Clawdbot is a "personal AI assistant" designed to run locally on your devices, as opposed to cloud-based options. (Think ChatGPT, Gemini, or Claude.) In fact, Clawdbot runs any number of AI models, including those from Anthropic, OpenAI, Google, xAI, and Perplexity. While you can run Clawdbot on Mac, Linux, and Windows, many online are opting to install the bot on dedicated Mac mini setups, leading to one part of the assistant's virality. But there are other AI assistants that can be run locally -- one thing that makes Clawdbot unique is that you communicate with it through chat apps. Which app you use is up to you, as Clawdbot works with apps like Discord, Google Chat, iMessages, Microsoft Teams, Signal, Telegram, WebChat, and WhatsApp. The idea is that you "text" Clawdbot as you would a friend or family member, but it acts as you'd expect an AI assistant to -- except, maybe more so. That's because, while Clawdbot can certainly do the things an AI bot like ChatGPT can, it's meant more for agentic tasks. In other words, Clawdbot can do things for you, all while running in the background on your devices. The bot's official website advertises that it can clear your inbox, send emails, manage your calendar, and check you in for flights -- though power users are pushing the tool to do much more. Clawdbot works with a host of apps and services you might use yourself. That includes productivity apps like Apple Notes, Apple Reminders, Things 3, Notion, Obsidian, Bear Notes, Trello, GitHub; music apps like Spotify, Sonos, and Shazam; smart home apps like Philips Hue, 8Sleep, and Home Assistant; as well as other major apps like Chrome, 1Password, and Gmail. It can generate images, search the web for GIFs, see your screen, take photos and videos, and check the weather. Based on the website alone, it has a lengthy résumé. The last big point here is that Clawdbot has an advertised "infinite" memory. That means the bot "remembers" every interaction you've ever had with it, as well as all the actions it's taken on your behalf. In theory, you could use Clawdbot to build apps, run your home, or manage your messages, all within the context of everything you've done before. In that, it'd really be the closest thing to a "digital assistant" we've seen on this scale. These assistants have been mostly actionable -- you ask the bot what you want to know or what you want done, and it (hopefully) acts accordingly. But the ideal version of Clawdbot would do all those things for you without you needing to ask. It's not just fans talking about Clawdbot Not everyone is psyched about Clawdbot, though. Take this user, who jokes that, after four messages, the bot made a reservation, then, after six messages, was able to send a calendar invite, only to cost $87 in Opus 4.7 tokens. This user came up with a story (at least I hope it's a story) where they give Clawdbot access to their stock portfolio and tasked it with making $1 million without making mistakes. After thousands of reports, dozens of strategies, and many scans of X posts, it lost everything. "But boy was it beautiful." I particularly like this take, which reads: "[I've] made a tragic discovery using [Clawdbot.] [There] simply aren't that many tasks in my personal life that are worth automating." There are also some jabs from what appear to be anti-AI users, like this one, that imagines a Clawdbot user with no job living in their parent's basement, asking the bot to do their tasks for the day. As with all things AI, there are many thoughts, opinions, and criticisms here, especially considering how viral this new tool is. But the main critique seems to be that Clawdbot requires a lot (in terms of hardware, power, and privacy) without really offering much in return. Sure, it can do things for you, but do you really need a bot booking your plane tickets, or combing through your emails? The answer to that, I suppose, is up to each of us, but the "backlash," if you can call it that, is likely coming from people who would answer "no." How to try Clawdbot If you want to try Clawdbot, you'll likely need to have some technical experience first. You can get started from Clawdbot's official github page, as well as Clawdbot's "Getting started" guide. According to this page, you'll begin by running the Clawdbot onboarding wizard, which will set you up with the gateway, workspace, channels, and skills. This works on Mac, Linux, and Windows, and while you won't need a Mac mini, it seems to be what the Clawdbot crowd is running with. Full disclosure: Clawdbot and its setup go beyond my expertise, and I will not be installing it on my devices. However, if you have the knowledge to follow these instructions, or the will to learn, the developer has the steps listed in the links above. How secure is Clawdbot? While I likely wouldn't install Clawdbot on my device anyway, the privacy and security implications here definitely keep me away. The main issue with Clawdbot is that it has full control and access over whichever device you run it on, as well as any of the software that is running therein. That makes sense, on the surface: How is an agentic AI supposed to do things on your behalf if it does have access to the apps and hardware necessary for execution? But the inherent security risk with any program like this involves prompt injection. Bad actors could sneak their own AI prompts into otherwise innocent sites and programs. When your bot crawls the text as it completes your task, it intercepts the prompt, and, thinking it's from you, executes that prompt instead. It's the main security flaw with AI browsers, and it could affect something like Clawdbot, too. And since you've given Clawdbot control over your entire computer and everything in it...yikes. Bad actors could manipulate Clawdbot to theoretically send DMs to anyone they like, run malicious programs, read and write files on your computer, trick Clawdbot into accessing your private data, and learn about your hardware for further cyber attacks. In Clawdbot's case, these prompt injections could come from a number of sources. They could come from messages via bad actors through the chat apps you communicate through Clawdbot, they could come from the browsers you use to access the internet, and they could come from plugins you run on various programs, to name a few possibilities. Clawdbot does have a security guide on its site that walks you through ways to shore up your defenses while using Clawdbot. The developer admits that running an AI agent with shell access on your machine is "spicy," that this is both a product and an experiment, and that there is no "perfectly secure" setup. That said, there are security features built in here that serve a purpose and attempt to limit who can access Clawdbot, where Clawdbot can go, and what Clawdbot can do. That could involve locking down DMs, viewing links and attachments as "hostile" by default, reducing high-risk tools, and running modern AI models that have better protections against prompt injection. Still, the whole affair is too risky for me, especially considering I'm not sure I really want an AI assistant in the first place. I think companies believe we want to offload tasks like calendars, messages, and creation to bots, to save us time from menial to-do lists. Maybe some do, but I don't. I want to know who is reaching out to me and why, and not trust an AI to decide what messages are worth my attention. I want to write my own emails and know what events I have on my own calendar. I also want access to my own computer. Maybe some people trust AI enough to handle all these things for them -- if it makes me a luddite to feel the opposite, so be it.
[8]
4 things you need to know about Clawdbot (now Moltbot)
If you've been following the AI agent space for more than a week, you know things move at a breakneck pace. But today felt different. Clawdbot, the viral open-source project that's been the talk of the "local-first" AI community, just officially molted. Following a trademark request from Anthropic, the project has rebranded to Moltbot. Same lobster mascot, new shell. I've been living with this agent -- which I affectionately call "Molty" -- running on a dedicated device for the past few days. Here is my unfiltered, veteran take on why this is the most important piece of software you'll install this year, and where the sharp edges are still hiding. 1. The "Claude with hands" reality check We've been promised "agents" for years. Usually, that means a web-based chatbot that can maybe search Google or hallucinate a Python script. Moltbot is different because it lives inside your file system. It's an agentic layer built primarily on Anthropic's Claude 4.5 Opus (though it's flexible), but unlike the web version, it has a "digital body." It can read your , it can move files into your Obsidian vault, and it can literally open a browser window on your machine to fight with a flight-booking UI while you're at lunch. Why the rebrand matters The shift from ClawdBot to Moltbot isn't just legal posturing. It marks the transition from a "hacky experiment" to a "personal OS." The creator, Peter Steinberger (the mind behind PSPDFKit), is leaning into the lobster metaphor: growth requires shedding the old, rigid ways we interact with computers. 2. The setup: "one-liner" vs. reality The marketing says "install in 5 minutes." In my experience, that's true if you're a dev. For everyone else, here's the 2026 hardware/software sweet spot: * The gold standard: A Mac Mini (M4 or newer) is the best "always-on" host. It's silent, power-efficient, and handles the local processing loops without breaking a sweat. * The software hook: You run a simple curl script, but the magic is in the Messaging Gateway. You don't "talk" to Moltbot in a browser. You DM it on Telegram, Signal, or WhatsApp. * The cost: While the software is MIT-licensed (free), don't be fooled. Running an autonomous agent that "thinks" before it acts can burn through API tokens. I've seen heavy users (myself included) hit $20-$50/month in Anthropic/OpenAI credits just by letting the bot "proactively" monitor things. 3. Real-world use cases (what I actually use it for) Forget the "make me a poem" fluff. Here is what Moltbot does in my actual workflow: * Morning briefings: At 8:00 AM, Molty pings my Telegram with a summary of my overnight emails, a weather-adjusted outfit suggestion, and a reminder of the one Jira ticket I've been ignoring. * The "unsubscribe" assassin: I can forward a newsletter to the bot and say, "Find the unsubscribe link and kill this." It opens a headless browser, navigates the "Are you sure?" traps, and confirms when it's done. * Recursive debugging: I point it at a local directory of broken Go code. It runs the tests, sees the failure, edits the file, and repeats the loop until the tests pass. Moltbot is essentially a "shell" for Skills. The community-driven ClawdHub (now transitioning to MoltHub) is where the real power lies. You can "ask" your bot to install a skill, and it will pull the TypeScript or Python code, configure the environment, and suddenly it knows how to: 4. The elephant in the room: Security Giving a third-party AI agent "Full Disk Access" and shell permissions on your primary machine is, objectively, a security nightmare if you aren't careful. Veteran warning: We've already seen reports of "Prompt Injection" attacks where a malicious email could theoretically trick an agent into deleting files or exfiltrating API keys. My advice? Use a dedicated machine or a hardened container. Don't give it your primary bank credentials. Use the "Ask Mode" for any command that involves or sensitive system changes. Moltbot includes a web-based admin panel where you can review every single command it executed -- check your logs. The first time in 20 years of being a tech enthusiast that I've felt like I actually have a digital employee. It's messy, it's occasionally expensive, and it requires a bit of "tinkerers' spirit."
[9]
What's so good (and not so good) about Clawdbot, the viral AI assistant
Clawdbot is an open-source agentic AI assistant that runs locally on users' computers. Created by PSPDFKit founder Peter Steinberger, this does not merely work as a chatbot, but also takes actions on a user's behalf such as monitoring emails and calendars, managing files, etc. Amid the glut of AI tools in existence today, Clawdbot has gone viral of late. It all started when the bot's GitHub repository exploded with thousands of stars (of appreciation) in a single day this month. But what are its promises and pitfalls? What is Clawdbot? It's an open-source agentic AI assistant that runs locally on users' computers. Created by PSPDFKit founder Peter Steinberger, this does not merely work as a chatbot, but also takes actions on a user's behalf such as monitoring emails and calendars, managing files, etc. While the tool connects to other AI models like Claude, ChatGPT, or Gemini, it operates with deep access to a user's own files, apps, and online accounts. Unlike standard chatbots, it remembers context over time, handles ongoing tasks, and can proactively send reminders, briefings, or alerts. Most users interact with it through messaging apps like Telegram, making it feel like texting a personal AI assistant. How does it work? The system works in three layers with an external AI model for intelligence, the Clawdbot software running on your PC, and a messaging interface for interaction. This allows for highly personalised automation, which has made it popular among developers and tech enthusiasts. Because it operates directly on your PC and can access system tools, Clawdbot offers more autonomy and customisation than cloud-based assistants. Its code is publicly available on GitHub, but setup requires technical expertise. Any concerns? The deep system access also creates serious security risks. Clawdbot can read and write files, run commands, and control web browsers as it has access to the operating system. Its own documentation warns that there is no perfectly secure setup, and highlights threats such as attackers manipulating the AI into leaking data or performing harmful actions. While there are security guides and audit tools, users are expected to understand and manage the risks themselves.
[10]
ClawdBot AI Assistant Handles Email, Calendars and Files Locally : Skip the Cloud
What if you could delegate your most tedious daily tasks to an AI assistant that works tirelessly, respects your privacy, and operates entirely on your own hardware? Below, WorldofAI takes you through how ClawdBot, a 24/7 AI agent, is redefining automation by combining innovative functionality with unparalleled data control. Imagine an assistant that not only manages your emails and schedules but also tackles advanced tasks like financial trading and market research, all without sending your sensitive information to external servers. This isn't just another cloud-based service; ClawdBot is a local powerhouse that puts you in charge of your digital life. In this explainer, you'll discover how ClawdBot's privacy-first design and cross-platform compatibility make it a standout choice for anyone looking to streamline their workflows. From automating file organization to integrating with popular chat platforms like WhatsApp and Slack, ClawdBot offers a level of customization that adapts to your unique needs. But it's not all plug-and-play, setting up ClawdBot requires thoughtful configuration and attention to security. If you're curious about how this AI assistant can transform your productivity while keeping your data safe, this breakdown will show you what's possible and what to watch out for. ClawdBot AI Assistant Key Features Automate Your Life ClawdBot's defining characteristic is its ability to function locally, giving you complete control over your data. Whether you're running it on a Mac Mini, a high-performance RTX 4090 system, or a virtual private server (VPS), ClawdBot adapts to your hardware environment. Its core functionalities include: * Email and Calendar Management: Automate your communications and scheduling to stay on top of your commitments. * File Organization: Automatically rename, sort, and manage files for improved efficiency and reduced clutter. * Chat Platform Integration: Connect seamlessly with platforms like WhatsApp, Telegram, Discord, and Slack to automate communication workflows. * Advanced Applications: Perform tasks such as financial trading and market research for professional and analytical purposes. These features are designed to reduce repetitive tasks, save time, and enhance productivity, making ClawdBot a valuable addition to your toolkit. Flexibility Across Platforms ClawdBot is compatible with a wide range of operating systems, including macOS, Windows, iOS, and Android. This cross-platform compatibility ensures that it integrates smoothly into your existing workflows, regardless of the devices you use. Its adaptability makes it suitable for a variety of environments, from personal laptops to enterprise-level systems. However, the performance of ClawdBot is heavily dependent on your hardware. Tasks such as video editing or data processing require sufficient RAM, CPU power, and reliable network access to function effectively. For users who prefer cloud hosting, ClawdBot can also be deployed on platforms like AWS. The AWS free tier offers an affordable entry point for hosting the assistant, making it accessible even for those with limited resources. ClawdBot The 24/7 AI Agent Employee Learn more about AI assistants by reading our previous articles, guides and features : Security Considerations While ClawdBot's local deployment enhances privacy by keeping data off external servers, it also introduces potential security risks. The assistant requires full access to your local files and system commands to perform its tasks effectively. Without proper safeguards, this level of access could lead to unauthorized actions or data breaches. Misconfigurations or vulnerabilities in the system may expose sensitive information or create opportunities for exploitation. To mitigate these risks, it is essential to implement robust sandboxing measures. By isolating ClawdBot from critical system components, you can ensure that it operates within defined boundaries, minimizing the risk of unauthorized access. Regular updates and security patches should also be applied to address any vulnerabilities that may arise. Setup and Customization ClawdBot's installation process is designed to be user-friendly, featuring command-line tools and guided setup wizards to simplify the process. Once installed, you can customize its functionality through a variety of plugins. These plugins allow you to tailor ClawdBot to your specific needs, enhancing its utility for both personal and professional applications. Examples of integrations include: * Apple Notes: Streamline your note-taking and organization. * Excel: Manage and analyze data efficiently for business or personal projects. * Web Search Tools: Conduct research tasks with ease and precision. This level of customization ensures that ClawdBot can adapt to a wide range of use cases, making it a highly versatile tool. Alternatives in the Market While ClawdBot offers a unique combination of local deployment and open source flexibility, it is not the only AI assistant available. Competitors like Agent Zero have been providing similar or even more advanced features for years. However, ClawdBot's growing popularity can be attributed to its focus on privacy and local control, which sets it apart from many cloud-based solutions. When evaluating ClawdBot, it's important to consider its advantages, such as its ability to operate offline and its customizable nature, against its limitations, including hardware dependencies and potential security risks. This balanced approach will help you determine whether ClawdBot aligns with your specific needs and priorities. Practical Applications ClawdBot's versatility makes it suitable for a wide range of applications, from personal productivity to professional workflows. Common use cases include: * Task Automation: Save time by automating repetitive tasks like file renaming, email management, and calendar scheduling. * Workflow Optimization: Enhance productivity by integrating ClawdBot with apps and APIs that streamline your daily operations. * Creative and Analytical Tasks: Support resource-intensive activities such as video editing, data analysis, and market research. These capabilities demonstrate how ClawdBot can be a valuable tool for individuals and organizations looking to optimize their workflows and increase efficiency. Recommendations and Precautions Before adopting ClawdBot, it is crucial to address its security implications. Proper sandboxing and configuration are essential to prevent unauthorized access or system vulnerabilities. Additionally, ensure that your hardware meets the assistant's resource requirements to avoid performance bottlenecks and ensure smooth operation. By taking these precautions, you can maximize ClawdBot's potential while minimizing risks. Whether you're looking to automate simple tasks or explore advanced applications, ClawdBot offers a powerful and flexible solution to enhance your productivity. Its focus on privacy, control, and adaptability makes it a standout choice for users seeking a reliable AI assistant tailored to their unique needs.
Share
Share
Copy Link
The open-source AI personal assistant Clawdbot has taken AI circles by storm, with users praising its ability to autonomously manage emails, calendars, and system tasks. But security researchers have uncovered hundreds of misconfigured deployments exposing private data, and experts question whether the tool's sweeping access to user credentials and system commands creates more risk than reward.
An open-source AI agent called Clawdbot has rapidly gained viral attention across developer and AI enthusiast communities, drawing both excitement and alarm in equal measure. Created by Pete Steinberger, a developer who claims he "came back from retirement to mess with AI and help a lobster take over the world," Clawdbot represents a new breed of AI personal assistant that goes far beyond simple chatbot interactions
2
. The tool has become so popular that Anthropic requested a name change due to its similarity to the Claude AI chatbot, leading to its rebrand as Moltbot2
.
Source: Geeky Gadgets
What sets this AI agent apart from conventional chatbots is its ability to proactively take actions without user prompts. Clawdbot connects to messaging platforms like WhatsApp, Telegram, Slack, Signal, and Microsoft Teams, acting as a persistent assistant that can automate tasks ranging from clearing inboxes to sending morning briefings
1
. Users have shared stories of personal bots that write code overnight and send cheerful status updates by morning, with some even purchasing dedicated Mac Mini hardware just to host their AI butler1
.Technically, Clawdbot functions as a gateway rather than a standalone model. The self-hosted system runs on your own hardware or cloud instance and routes messages to large language models (LLMs) like Claude or OpenAI's models
3
. It interprets responses and uses them to decide which tools to invoke, maintaining long-term memory and exposing a web-based control interface where users configure integrations and permissions3
.
Source: Mashable
The appeal lies in its comprehensive capabilities. Users can ask Clawdbot to summarize conversations across platforms, schedule meetings, monitor prices, deploy code, or execute commands on servers through natural language
3
. The system features real-world automation through skills that let it run commands, organize files, fill out web forms, and interact with devices, with the community continuously adding more capabilities1
. Running the tool costs approximately $3-$5 per month on a basic Virtual Private Server, with some users finding success on AWS's free tier2
.The excitement surrounding Clawdbot has been tempered by serious security concerns that highlight the inherent risks of agentic AI systems. Security researcher Jamieson O'Reilly documented how misconfigured deployments had left administrative interfaces exposed to the public internet, with hundreds of instances vulnerable to unauthorized access
3
. Of the instances O'Reilly examined manually, eight were completely open with no authentication, exposing full access to run commands and view configuration data4
.
Source: Forrester
These vulnerabilities could allow attackers to access months of private messages, extract API keys, read user credentials, and even execute arbitrary commands on host systems, sometimes with root access
3
. The specific flaw involved a reverse-proxy configuration issue that caused all traffic to be treated as trusted, which has since been patched3
. However, the incident exposed structural risks inherent in systems that require sweeping permissions to function.Beyond misconfigured deployments, O'Reilly revealed another critical vulnerability through a proof-of-concept supply chain exploit targeting ClawdHub, the AI assistant's skills library available on GitHub
4
. He uploaded a publicly available skill, artificially inflated the download count to more than 4,000, and watched as developers from seven countries downloaded the package4
. While O'Reilly's payload was benign, it proved he could execute commands on a Clawdbot instance. "In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong," he stated4
.ClawdHub currently states in its developer notes that all code downloaded from the library will be treated as trusted code, with no moderation process in place
4
. This places the burden entirely on developers to properly vet anything they download, creating a significant gap between consumer enthusiasm and the technical expertise needed to operate securely.While Clawdbot may seem appealing for individual power users, security experts warn that AI tools like this become terrifying when mapped into an enterprise threat model
1
. For an AI agent to be useful, it must access private data, store user credentials, execute commands, and maintain persistent state—each requirement undermining assumptions that traditional security models rely on2
.SOCRadar, a threat intelligence platform, recommends treating Clawdbot as "privileged infrastructure" and implementing additional security precautions through proper sandboxing
2
. Eric Schwake, director of cybersecurity strategy at Salt Security, noted that "a significant gap exists between the consumer enthusiasm for Clawdbot's one-click appeal and the technical expertise needed to operate a secure agentic gateway"4
. The concern extends to Shadow IT scenarios, where employees might introduce the tool into corporate environments without proper oversight, creating new attack surfaces.Related Stories
Steinberger himself acknowledges the risks in Clawdbot's documentation. "Running an AI agent with shell access on your machine is... spicy," the FAQ states. "There is no 'perfectly secure' setup"
5
. The tool requires full system access to deliver on its promises, meaning it can read and write files, run commands, execute scripts, and control browsers5
.Support documentation acknowledges that bad actors could "try to trick your AI into doing bad things" and "social engineer access to your data"
5
. Infostealers notes that while keeping data local may seem to enhance security, hackers are finding ways to tap into local data, creating a "honey pot" for commodity malware2
. The rise of local-first AI agents has introduced a highly lucrative attack surface for cybercriminals seeking access to concentrated credentials and sensitive information.Clawdbot's viral success signals both the appeal and the danger of agentic AI systems that promise to transform productivity. The tool has succeeded where many high-profile AI agents failed in 2025, delivering tangible results that resonate with early adopters
5
. Users report that it feels like what AI should do for them—a single assistant across everything that remembers projects, preferences, and quirks while anticipating next steps1
.Yet the security implications cannot be ignored. Even when correctly configured, tools like Clawdbot require sweeping access to function at all, concentrating an extraordinary amount of privilege in a single always-on system
3
. The documentation recommends starting with the smallest access that still works, then widening it as users gain confidence, while limiting who can talk to the bot, where it can act, and what it can touch2
. For organizations, the challenge will be balancing the productivity gains employees seek against the security posture required to protect sensitive systems and data in an era where AI butlers want to manage the entire house.Summarized by
Navi
[3]
[4]
[5]
1
Technology

2
Technology

3
Policy and Regulation
