X, the social media platform owned by Elon Musk, will temporarily suspend creators from its revenue-sharing program if they post AI-generated videos depicting armed conflict without clear disclosure. The policy change, announced Tuesday by X’s head of product Nikita Bier, comes amid heightened tensions following recent strikes in Iran and a surge of related content on the platform.
“During times of war, it is critical that people have access to authentic information on the ground,” Bier stated in a post on X. He added that advancements in artificial intelligence have made it easier to create content “that can mislead people.” Creators found to have violated the new policy will face a 90-day suspension from Creator Revenue Sharing, with repeat offenses resulting in permanent removal from the program.
Bier clarified that creators can add a “Made with AI” label to their posts through the content disclosure options within the platform’s menu. X will utilize metadata from AI systems, combined with its Community Notes feature – a crowd-sourced fact-checking tool – to identify potentially misleading AI-generated content, according to Bier.
Launched in mid-2023, X’s Creator Revenue Sharing program allows eligible creators – those with X Premium subscriptions or verified organization status, plus at least 5 million organic impressions and 500 verified followers in the past three months – to monetize their content based on engagement metrics. The shift to engagement-based payouts occurred in 2024, replacing a previous system based on ad impressions.
The policy update follows a recent investigation by Wired, which reported a proliferation of misinformation surrounding the conflict in Iran on X. The report detailed instances of old videos being presented as current events, outright disinformation regarding attacks, and the widespread sharing of AI-generated imagery, even by the Iranian newspaper Tehran Times.
X has already faced challenges with AI-generated content. In late December and early January, the platform was inundated with nonconsensual, sexualized deepfakes created using Grok AI, X’s embedded artificial intelligence chatbot. While X subsequently tightened its rules, the changes were not comprehensive, and users continue to find ways to generate such content through various Grok interfaces.
The move to address AI-generated misinformation coincides with X’s efforts to reassure advertisers concerned about brand safety. Last week, the company hosted a webinar for advertisers, promoting its capabilities in this area, according to ADWEEK. X’s advertising revenue has fallen to roughly half of what it was before Elon Musk’s acquisition of the platform in late 2022 for $44 billion, according to data from Emarketer.
X did not respond to a request for comment.