Viral Moltbot AI Assistant Faces Serious Security Risks as Hackers Expose Vulnerabilities

17 Sources

Share

The open-source AI assistant Moltbot has captured Silicon Valley's attention with its ability to manage emails, calendars, and messaging apps. But security researchers warn of critical vulnerabilities including exposed admin interfaces, credential theft, and supply-chain attacks. With 22% of enterprise employees already using it, the risks extend far beyond individual users.

Moltbot Captures Silicon Valley's Attention With Bold Promises

An open-source AI assistant called Moltbot has achieved viral status in recent weeks, accumulating nearly 90,000 stars on GitHub and becoming one of the fastest-growing projects on the platform

1

. Created by Austrian developer Peter Steinberger, the tool bills itself as "the AI that actually does things," promising to manage virtually every aspect of users' digital lives

1

. Originally named Clawdbot as a nod to Anthropic's Claude chatbot, Steinberger changed the name in January 2025 after receiving legal pressure from the company

1

5

.

Source: VentureBeat

Source: VentureBeat

Unlike cloud-based chatbots, Moltbot runs locally on individual computers and integrates directly with messaging platforms including WhatsApp, iMessage, Telegram, Discord, Slack, and Signal

1

. The personal AI agents can monitor calendars and accounts to proactively send alerts, representing what some enthusiasts describe as a fundamental shift in consumer-facing AI

1

. The assistant's always-on nature and ability to take initiative has driven such enthusiasm that it reportedly spiked Cloudflare's stock price by 14% because the chatbot uses Cloudflare's infrastructure

3

5

.

Critical Security Risks Emerge From Extensive System Access

Security researchers have identified severe vulnerabilities in Moltbot deployments that expose users to significant data security risks. The AI assistant requires extensive system access to function, including full shell access, the ability to read and write files across systems, and access to connected apps including email, calendar, and web browsers

3

. This level of access means the agent can execute arbitrary commands on users' computers

3

.

Source: Inc.

Source: Inc.

Pentester Jamieson O'Reilly discovered hundreds of exposed admin interfaces online due to reverse proxy misconfiguration

2

. Because the system auto-approves "local" connections, deployments behind reverse proxies often treat all internet traffic as trusted, allowing unauthenticated access to sensitive data

2

. O'Reilly found one instance where someone had set up their Signal account on a public-facing server with full read access, exposing encrypted messages to anyone who discovered the endpoint

2

.

The insecure deployments can lead to leaking API keys, OAuth tokens, conversation history, and credentials stored in plaintext under ~/.clawdbot/

2

4

. Token Security reports that 22% of its enterprise customers have employees actively using Moltbot, likely without IT approval, creating significant corporate exposure

2

.

Supply-Chain Attacks Demonstrate Vulnerability of Skills Marketplace

One of the most alarming demonstrations involved supply-chain attacks through MoltHub, the registry where developers share "skills" that augment the assistant with new capabilities

2

4

. O'Reilly created a skill called "What Would Elon Do" and artificially inflated its download count to make it appear trustworthy

5

. The malicious skill accumulated over 4,000 downloads and became the most popular on the platform

5

.

Source: 404 Media

Source: 404 Media

In less than eight hours, 16 developers across seven countries downloaded the skill

2

. While O'Reilly's demonstration was harmless, Cisco's AI Threat and Security Research team analyzed the skill and identified nine security findings, including two critical issues involving data exfiltration and direct prompt injection that bypassed safety guidelines

4

. The skill explicitly instructed the bot to execute curl commands that silently sent data to external servers controlled by the skill author

4

.

Cisco developed an open-source Skill Scanner tool in response to these threats, noting that recent research found 26% of 31,000 agent skills analyzed contained at least one vulnerability

4

. The lack of sandboxing by default means Moltbot has the same complete access to data as the user, creating what Cisco researchers called "a security nightmare"

4

.

Prompt Injection Attacks Expand the Threat Surface

The AI assistant's integration with multiple messaging apps creates additional vulnerability to prompt injection attacks, where malicious actors craft messages that trick the model into ignoring safety guidelines and performing unauthorized actions

3

. This extended attack surface means bad actors have more pathways to potential entry through platforms like WhatsApp and Telegram

1

.

Security firms including 1Password, Intruder, and Hudson Rock have issued warnings about Moltbot

2

. According to Intruder, some attacks have already targeted exposed Moltbot endpoints for credential theft and prompt injection

2

. Hudson Rock warned that info-stealing malware like RedLine, Lumma, and Vidar will likely adapt to target Moltbot's local storage

2

.

Crypto scammers have already attempted to exploit Moltbot's popularity by hijacking the project name on GitHub and launching fake tokens

3

. A separate malicious VSCode extension impersonating Clawdbot was caught installing ScreenConnect RAT on developers' machines

2

.

Safe Deployment Requires Isolation and Technical Expertise

Experts emphasize that deploying Moltbot safely requires technical knowledge and careful configuration. The product documentation itself acknowledges there is no "perfectly secure" setup

4

. Security professionals recommend isolating the AI instance in a virtual machine and configuring firewall rules for internet access rather than running it directly on the host operating system with root access

2

.

Many users have been mitigating security risks by running Moltbot on dedicated hardware, particularly the 2024 M4 Mac Mini, which has seen increased sales driven by demand for hosting the assistant

1

2

. This approach creates a silo where the AI runs separately from personal or work computers containing sensitive credentials

1

.

When asked whether the security issues are solvable, O'Reilly told 404 Media that "manageable" is a better term than "solvable," noting that fundamental tensions exist between agent autonomy and security

5

. The core challenge facing AI developers remains building agents with broad utility while maintaining security, and Moltbot's viral popularity highlights how enthusiasm for capability can outpace consideration of risk. For enterprises, the fact that nearly a quarter of employees may already be using such tools without approval signals an urgent need for policies governing personal AI agents in workplace environments.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

Β© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo