March 25, 2026 — Anthropic has officially launched Auto Mode for Claude Code. The upgrade lets Claude autonomously assess the safety of code operations: safe operations are executed directly, while risky operations are automatically intercepted and flagged for user confirmation. This ends the tedious choice between the cumbersome "manually confirm every step" experience and a fully unattended "dangerous driving" mode.
Auto Mode Does Not Simply Remove All Restrictions
Unlike the previous conservative mode, which required the user to approve each operation individually, Auto Mode relies on a dedicated classifier model that reviews each operation before Claude executes it. The classifier evaluates potential risks in real time, keeping the AI within a safe boundary while it operates efficiently.
Four Core Risks Automatically Intercepted
The classifier focuses on scanning for the following four high-risk behaviors:
Mass file deletion
Sensitive data leakage
Malicious code execution
Prompt injection attacks (i.e., malicious instructions hidden within the content the AI is processing)
Clear and Efficient Priority-Based Decision Logic
The system uses a layered decision mechanism:
First, it checks whether an explicit blocking rule (soft_deny) applies;
Next, it checks whether an explicit allow rule (allow) applies;
Finally, it evaluates whether the user's intent is clear enough.
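The layered logic above can be sketched as a small decision function. Only the rule names soft_deny and allow come from the announcement; the glob-pattern rule format, the function signature, and the intent check are illustrative assumptions, not Anthropic's actual implementation.

```python
# Illustrative sketch of the layered decision logic. Only the names
# soft_deny / allow come from the article; the pattern format and the
# intent fallback are hypothetical.
from fnmatch import fnmatch

def decide(operation: str, rules: dict, intent_is_clear: bool) -> str:
    """Return 'block', 'allow', or 'ask' for a proposed operation."""
    # 1. Explicit blocking rules are checked first and win outright.
    if any(fnmatch(operation, p) for p in rules.get("soft_deny", [])):
        return "block"
    # 2. Explicit allow rules are checked next.
    if any(fnmatch(operation, p) for p in rules.get("allow", [])):
        return "allow"
    # 3. With no matching rule, fall back to how clear the user's intent is.
    return "allow" if intent_is_clear else "ask"

rules = {"soft_deny": ["rm -rf *"], "allow": ["git status", "pytest*"]}
print(decide("rm -rf /tmp/build", rules, True))   # matches a soft_deny rule
print(decide("pytest -q", rules, False))          # matches an allow rule
print(decide("curl example.com", rules, False))   # no rule, unclear intent
```

The key design point the article describes is ordering: deny rules are consulted before allow rules, so an explicit block can never be overridden by a broad allow pattern.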
If Claude repeatedly attempts a blocked operation, the system automatically prompts the user to intervene, ensuring that ultimate control always remains with the user.
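The repeated-attempt escalation could be tracked with a simple counter per operation. Everything here is a hypothetical sketch: the class name, the threshold of three attempts, and the prompt mechanism are assumptions, since the announcement does not specify them.

```python
# Hypothetical sketch of the repeated-attempt escalation: after enough
# blocked attempts at the same operation, hand control back to the user.
# The threshold of 3 is an assumption, not a documented value.
from collections import Counter

class BlockEscalator:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.blocked_counts = Counter()

    def record_block(self, operation: str) -> bool:
        """Record a blocked attempt; return True once the user should be prompted."""
        self.blocked_counts[operation] += 1
        return self.blocked_counts[operation] >= self.threshold

esc = BlockEscalator()
for attempt in range(3):
    needs_user = esc.record_block("rm -rf build/")
print(needs_user)  # True after the third blocked attempt
```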
The launch of Auto Mode marks another breakthrough for Claude Code in intelligent coding. Developers will be able to focus more on business logic without spending as much energy on security verification. AIbase will continue to monitor the mode's real-world performance after launch and publish more in-depth evaluations as soon as possible.
