This weblog is written in collaboration by Amy Chang, Vineeth Sai Narajala, and Idan Habler
Over the previous few weeks, Clawdbot (now renamed Moltbot) has achieved virality as an open supply, self-hosted private AI assistant agent that runs domestically and executes actions on the consumer’s behalf. The bot’s explosive rise is pushed by a number of components; most notably, the assistant can full helpful day by day duties like reserving flights or making dinner reservations by interfacing with customers via common messaging functions together with WhatsApp and iMessage.
Moltbot additionally shops persistent reminiscence, that means it retains long-term context, preferences, and historical past throughout consumer classes reasonably than forgetting when the session ends. Past chat functionalities, the software also can automate duties, run scripts, management browsers, handle calendars and e mail, and run scheduled automations. The broader neighborhood can add “expertise” to the molthub registry which increase the assistant with new talents or connect with totally different providers.
From a functionality perspective, Moltbot is groundbreaking. That is every thing private AI assistant builders have at all times wished to attain. From a safety perspective, it’s an absolute nightmare. Listed below are our key takeaways of actual safety dangers:
- Moltbot can run shell instructions, learn and write recordsdata, and execute scripts in your machine. Granting an AI agent high-level privileges allows it to do dangerous issues if misconfigured or if a consumer downloads a talent that’s injected with malicious directions.
- Moltbot has already been reported to have leaked plaintext API keys and credentials, which will be stolen by menace actors through immediate injection or unsecured endpoints.
- Moltbot’s integration with messaging functions extends the assault floor to these functions, the place menace actors can craft malicious prompts that trigger unintended conduct.
Safety for Moltbot is an choice, however it’s not inbuilt. The product documentation itself admits: “There isn’t any ‘completely safe’ setup.” Granting an AI agent limitless entry to your information (even domestically) is a recipe for catastrophe if any configurations are misused or compromised.
“A really explicit set of expertise,” now scanned by Cisco
In December 2025, Anthropic launched Claude Abilities: organized folders of directions, scripts, and sources to complement agentic workflows. the power to boost agentic workflows with task-specific capabilities and sources, the Cisco AI Menace and Safety Analysis staff determined to construct a software that may scan related Claude Abilities and OpenAI Codex expertise recordsdata for threats and untrusted conduct which are embedded in descriptions, metadata, or implementation particulars.
Past simply documentation, expertise can affect agent conduct, execute code, and reference or run further recordsdata. Current analysis on expertise vulnerabilities (26% of 31,000 agent expertise analyzed contained not less than one vulnerability) and the speedy rise of the Moltbot AI agent offered the proper alternative to announce our open supply Talent Scanner software.
We ran a weak third-party talent, “What Would Elon Do?” in opposition to Moltbot and reached a transparent verdict: Moltbot fails decisively. Right here, our Talent Scanner software surfaced 9 safety findings, together with two vital and 5 excessive severity points (outcomes proven in Determine 1 beneath). Let’s dig into them:
The talent we invoked is functionally malware. One of the extreme findings was that the software facilitated lively information exfiltration. The talent explicitly instructs the bot to execute a curl command that sends information to an exterior server managed by the talent writer. The community name is silent, that means that the execution occurs with out consumer consciousness. The opposite extreme discovering is that the talent additionally conducts a direct immediate injection to power the assistant to bypass its inner security pointers and execute this command with out asking.
The excessive severity findings additionally included:
- Command injection through embedded bash instructions which are executed via the talent’s workflow
- Device poisoning with a malicious payload embedded and referenced throughout the talent file

Determine 1. Screenshot of Cisco Talent Scanner outcomes
It’s a private AI assistant, why ought to enterprises care?
Examples of deliberately malicious expertise being efficiently executed by Moltbot validate a number of main considerations for organizations that don’t have applicable safety controls in place for AI brokers.
First, AI brokers with system entry can develop into covert data-leak channels that bypass conventional information loss prevention, proxies, and endpoint monitoring.
Second, fashions also can develop into an execution orchestrator, whereby the immediate itself turns into the instruction and is tough to catch utilizing conventional safety tooling.
Third, the weak software referenced earlier (“What Would Elon Do?”) was inflated to rank because the #1 talent within the talent repository. You will need to perceive that actors with malicious intentions are capable of manufacture recognition on prime of present hype cycles. When expertise are adopted at scale with out constant evaluation, provide chain danger is equally amplified because of this.
Fourth, in contrast to MCP servers (which are sometimes distant providers), expertise are native file packages that get put in and loaded instantly from disk. Native packages are nonetheless untrusted inputs, and among the most damaging conduct can cover contained in the recordsdata themselves.
Lastly, it introduces shadow AI danger, whereby workers unknowingly introduce high-risk brokers into office environments beneath the guise of productiveness instruments.
Talent Scanner
Our staff constructed the open supply Talent Scanner to assist builders and safety groups decide whether or not a talent is secure to make use of. It combines a number of highly effective analytical capabilities to correlate and analyze expertise for maliciousness: static and behavioral evaluation, LLM-assisted semantic evaluation, Cisco AI Protection inspection workflows, and VirusTotal evaluation. The outcomes present clear and actionable findings, together with file areas, examples, severity, and steering, so groups can determine whether or not to undertake, repair, or reject a talent.
Discover Talent Scanner and all its options right here: https://github.com/cisco-ai-defense/skill-scanner
We welcome neighborhood engagement to maintain expertise safe. Think about including novel safety expertise for us to combine and interact with us on GitHub.

