Social media platform X (formerly Twitter) announced on Tuesday that it will crack down on creators who post AI-generated videos of armed conflicts without clearly labeling them as such. Nikita Bier, a product executive at X, stated that any use of artificial intelligence to mislead others will result in removal from the creator revenue-sharing program: first-time offenders face a 90-day suspension, and a repeat violation after the suspension period will bring a permanent ban from the program.

Bier noted that accurate real-time information is crucial during critical periods such as wars, and that current AI technology has lowered the barrier to creating misleading content, making platform intervention necessary. To identify violations, X will combine generative-AI detection tools with its crowdsourced fact-checking system, Community Notes. The policy change is also meant to address side effects of the revenue-sharing program itself: the program has drawn controversy for rewarding sensational, emotionally provocative content, and critics argue that its lax controls have exacerbated the spread of misinformation.
Although the move marks progress in the transparency of X's content governance, observers note that the measures remain limited. The new rules cover only armed conflict, leaving political misinformation and deceptive AI-driven product promotions in the influencer economy outside the same restrictions. In an era when AI media is easily weaponized, balancing the creator economy against information authenticity remains a long-term challenge for social platforms worldwide.
