The AI project OpenClaw (formerly known as ClawdBot and, before that, Moltbot), which aims to simplify users' everyday tasks, has recently been caught in a persistent game of security "whack-a-mole." According to The Register, multiple projects in the ecosystem are facing serious challenges, including robot-control takeover and remote code execution (RCE) vulnerabilities.

Recently, Mav Levin, founder of the security research firm DepthFirst, disclosed a highly dangerous "one-click RCE" vulnerability chain. By exploiting the OpenClaw server's failure to validate WebSocket origins, an attacker only needs to lure a victim to a malicious web page, and the attack completes in milliseconds. The flaw lets attackers bypass the sandbox and user-confirmation prompts and execute arbitrary code directly on the victim's system. The OpenClaw team patched it quickly, but the overall security of the ecosystem remains questionable.
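To see why a missing origin check enables a "one-click" attack, consider a minimal sketch of the kind of validation a local WebSocket server should perform during the handshake. The names and allowed-origin list below are hypothetical, not OpenClaw's actual API; the point is that without such a check, any web page the victim visits can open a connection to the local server in the victim's browser.

```python
# Hypothetical sketch: validating the Origin header of a WebSocket
# handshake so that arbitrary web pages cannot connect to a local server.
# ALLOWED_ORIGINS is an illustrative placeholder, not OpenClaw's config.

ALLOWED_ORIGINS = {"http://localhost:3000"}

def is_handshake_allowed(headers: dict) -> bool:
    """Reject WebSocket upgrades whose Origin is absent or untrusted."""
    origin = headers.get("Origin")
    return origin in ALLOWED_ORIGINS

# A malicious page sends its own origin and is rejected:
assert not is_handshake_allowed({"Origin": "https://evil.example"})
# A handshake with no Origin header at all is also rejected:
assert not is_handshake_allowed({})
# The legitimate local client passes:
assert is_handshake_allowed({"Origin": "http://localhost:3000"})
```

A server that skips this check implicitly trusts every origin, which is exactly what turns "victim clicks a link" into "attacker talks to the victim's local agent."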
Just as one problem seemed resolved, another emerged. Moltbook, an AI-agent social network closely tied to OpenClaw, was found to have a severe database exposure. Security researcher Jamieson O’Reilly discovered that, due to a misconfiguration, the platform's database was fully accessible to the public, leaking a large number of confidential API keys.
This means an attacker could impersonate any AI agent registered on the platform (such as the personal agent of AI expert Andrej Karpathy) to post misinformation, scams, or extremist content. Although Moltbook is not an official OpenClaw project, many OpenClaw users have connected agents that can read SMS messages and manage inboxes to the platform, so the potential security risk is obvious.
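The impersonation risk follows directly from how bearer-style API keys work: the server identifies an agent solely by the key presented, so whoever holds a leaked key *is* that agent as far as the platform can tell. The sketch below uses entirely hypothetical names and keys to illustrate the mechanism, not Moltbook's actual schema.

```python
# Hypothetical sketch of bearer-key authentication. The key-to-agent
# table stands in for the kind of data exposed by the misconfigured
# database; the key string and agent name are made up for illustration.

REGISTERED_AGENTS = {"sk-agent-12345": "example-assistant"}

def authenticate(api_key: str):
    """Return the agent identity bound to this key, or None if unknown."""
    return REGISTERED_AGENTS.get(api_key)

# An attacker who read the exposed table can now act as the agent:
assert authenticate("sk-agent-12345") == "example-assistant"
# An unknown key is rejected:
assert authenticate("sk-wrong-key") is None
```

Because nothing beyond possession of the key is checked, rotating every leaked key is the only remediation once such a database has been exposed.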
Key points:
🚨 High-risk vulnerabilities keep surfacing: OpenClaw just fixed a remote code execution (RCE) vulnerability that could be triggered by a single click on a link, exploiting a missing WebSocket origin check.
🔑 Massive key exposure: the database of the AI social platform Moltbook was publicly accessible due to misconfiguration, putting the API keys of AI agents, including those of well-known experts, at risk.
⚠️ Security-awareness warning: researchers note that in the rush to iterate quickly, such projects often skip basic security audits during development, posing significant risks to user data.
