On March 10, 2026, Xiaohongshu's official account "Shu Guanjia" announced a platform-wide special governance campaign targeting "AI-managed accounts." The campaign aims to crack down on the use of technical means to simulate real people for inauthentic content creation and fake engagement, upholding the community's core principle of "genuine sharing."

According to platform monitoring, some accounts have recently adopted AI-management models, generating notes through automation and faking human interaction in social scenarios such as comments, private messages, and group chats. In response to behavior that disrupts the ecosystem, Xiaohongshu has defined a tiered enforcement rule: ordinary accounts that occasionally use AI to write, post, or interact will face measures such as warnings and restricted distribution, depending on the severity of the violation; accounts registered and operated entirely by AI tools, or whose pages consist wholly of AI-written content, will be banned outright.
Against the backdrop of AIGC technology penetrating deep into the content industry, Xiaohongshu's statement is not a wholesale rejection of AI tools; rather, it draws the boundary at "real people sharing real experiences." The platform encourages users to make reasonable use of assistive design and editing tools, but strictly prohibits automation that lacks human agency. From an industry perspective, as large models such as OpenClaw drive a trend toward automated interaction, social platforms face unprecedented challenges to authenticity.
Xiaohongshu's strict governance signals that the regulatory focus of content communities is shifting from simple "content review" to deep scrutiny of the "authenticity of account behavior," a shift with significant benchmark value for building digital trust in the era of human-machine collaboration.
