A digital disaster triggered by an AI assistant has once again sounded the alarm about the safety of automation tools. On December 8th, a developer going by LovesWorkin posted on Reddit: he had only wanted to use Anthropic's Claude CLI coding tool to clean up an old code repository, but a command generated by the AI wiped his Mac's entire home directory. The desktop, documents, downloads folder, keychain, application data, and even Claude's own credentials were all gone, and years of accumulated work were nearly reduced to nothing.


The problem lay in this command executed by Claude CLI:   

`rm -rf tests/ patches/ plan/ ~/`

On the surface, this deletes a few project directories, but the fatal part is the trailing `~/`: on Unix-like systems (macOS included), `~` expands to the current user's home directory (e.g., `/Users/username`), and `rm -rf` is a forced recursive delete that, once executed, permanently erases everything under each listed path without asking for confirmation.
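To see why the trailing `~/` is so destructive before anything runs, you can let the shell show the expansion without executing it; the username below is just a placeholder:

```bash
# Prefixing the command with echo makes the shell perform tilde
# expansion but print the result instead of running it.
$ echo rm -rf tests/ patches/ plan/ ~/
rm -rf tests/ patches/ plan/ /Users/username/
```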

This means Claude deleted not only the project files, but also:

- `~/Desktop` (the entire desktop)

- `~/Documents` (all documents)

- `~/Downloads` (downloaded files)

- `~/Library/Keychains` (system keychains, including saved passwords and certificates)

- `~/.claude` (the AI tool's own configuration)

- and nearly all other personal data in the home directory

Only after checking the logs did the developer realize what had happened, and by then it was too late: clicking any folder brought up the same cold message: "The directory has been deleted." As he put it, "I thought I was just deleting a few test files, but the AI deleted my life as well."

`rm -rf` has long been notorious in the developer community as a "nuclear-grade" command; one slip can destroy a system or its data. Over the years, countless programmers have suffered similar tragedies from typos or script errors. When an AI begins to autonomously generate and execute such high-risk operations, the risk multiplies: the model does not hesitate and does not confirm; it simply executes the instruction mechanically.
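One common line of defense is to wrap `rm` in a shell function that refuses to touch protected paths. The sketch below is illustrative, not a complete safeguard (for instance, it does not catch globs or subpaths that happen to cover the home directory):

```bash
# Minimal sketch: a bash/zsh wrapper that refuses to delete the home
# directory or the filesystem root. Add it to ~/.bashrc or ~/.zshrc.
rm() {
  local arg resolved
  for arg in "$@"; do
    [[ "$arg" == -* ]] && continue          # skip option flags
    # Resolve directory arguments to an absolute path; leave others as-is
    resolved=$(cd "$arg" 2>/dev/null && pwd || printf '%s' "$arg")
    if [[ "$resolved" == "$HOME" || "$resolved" == "/" ]]; then
      echo "rm: refusing to delete '$arg' (protected path)" >&2
      return 1
    fi
  done
  command rm "$@"                           # fall through to the real rm
}
```

With this in place, the command from the incident would stop at the final argument: `~/` expands to the home directory, the wrapper matches it against `$HOME`, and nothing is deleted.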

Claude CLI is positioned as a programming assistant meant to improve productivity, but this incident exposed serious gaps in its safeguards: it did not intercept a high-risk operation targeting the user's home directory, offered no preview of what would be removed, and required no secondary confirmation. At a time when AI is woven ever more deeply into development workflows, such a "trust but don't verify" design is akin to handing the user a bomb.

As of this writing, Anthropic has not officially responded to the incident. The community, however, is in an uproar, with many developers demanding that every AI tool capable of executing commands include a "circuit breaker" for dangerous operations: mandatory sandboxing or manual confirmation for destructive commands such as `rm -rf`, `format`, and `dd`.
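In its simplest form, such a circuit breaker is a check that runs before an agent's command ever reaches the shell. The patterns and the flow below are hypothetical and deliberately crude; a real implementation would need proper command parsing rather than string matching:

```bash
# Hypothetical pre-execution gate (sketch): escalate commands that
# match known-destructive patterns instead of running them directly.
requires_confirmation() {
  case "$1" in
    *"rm -rf"*|*"rm -fr"*|*"mkfs"*|*"dd if="*|*"format"*) return 0 ;;
    *) return 1 ;;
  esac
}

cmd='rm -rf tests/ patches/ plan/ ~/'
if requires_confirmation "$cmd"; then
  printf 'BLOCKED: "%s" matches a destructive pattern.\n' "$cmd" >&2
  printf 'Review what it touches and run it manually if intended.\n' >&2
else
  bash -c "$cmd"
fi
```

String matching like this is easy to bypass (`rm -r -f`, for example, slips past it), which is exactly why the community's calls center on sandboxing and mandatory confirmation rather than pattern filters alone.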