X Community Notes Face Uphill Battle for Publication
Study reveals most user-submitted notes never reach public view.
A recent study raises concerns about the effectiveness of X’s (formerly Twitter) community notes feature in combating online disinformation, revealing that over 90% of notes submitted by users are never published.
Voting System Roadblock
On X, users can propose a community note below a post to add context or flag a factual inaccuracy. Other users then vote on whether they find the note helpful; if the note receives sufficient positive votes, it becomes visible to all users below the original post.
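This publication gate can be pictured as a simple vote threshold. The sketch below is a hypothetical simplification for illustration only: the `Note` class and the `min_votes` and `min_ratio` parameters are invented here, and X’s actual open-source Community Notes scorer uses a more complex bridging algorithm that requires agreement among raters with historically different viewpoints.

```python
from dataclasses import dataclass

@dataclass
class Note:
    text: str
    helpful_votes: int = 0
    not_helpful_votes: int = 0

    def is_published(self, min_votes: int = 5, min_ratio: float = 0.8) -> bool:
        """Publish only when enough users have voted and most found the note helpful."""
        total = self.helpful_votes + self.not_helpful_votes
        if total < min_votes:
            # Corresponds to notes that attract too few ratings
            # and so are never put up for a meaningful vote.
            return False
        return self.helpful_votes / total >= min_ratio


note = Note("This post is missing context; see the linked source.")
note.helpful_votes, note.not_helpful_votes = 8, 2
print(note.is_published())  # True: 10 votes, 80% rated helpful
```

Under this toy model, raising either threshold makes publication rarer; a growing backlog of notes awaiting enough ratings is one way the bottleneck described below could arise.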
According to the Digital Democracy Institute of the Americas (DDIA), which analyzed 1.76 million notes proposed between January 2021 and March 2025, the vast majority of submitted notes – more than 90% – never reach the public.
“For a system promoted as fast, scalable, and transparent, these figures should raise serious concerns,” the DDIA stated in its study.
Declining Visibility
The DDIA reports that in 2023, 9.5% of English-language notes were published, but by early 2025, that number had fallen to only 4.9%. Conversely, notes written in Spanish saw an increase in publication rates.
The study suggests that a significant proportion of unpublished notes fail to achieve consensus during the voting stage, while others are never put up for a vote at all. The researchers believe the growing volume of notes may be creating a bottleneck that reduces their visibility, leaving many “lost in limbo, invisible and unevaluated” by the community.
Community Notes’ Growing Popularity
Despite the low publication rate on X, the community notes concept, which expanded under **Linda Yaccarino**’s tenure as CEO (she resigned on Wednesday), is gaining traction, with competitors like TikTok and Meta (Facebook and Instagram) considering similar systems.
In fact, a Pew Research Center study found that 65% of U.S. adults believe social media companies have a responsibility to remove false information, even if it means restricting freedom of speech (Pew Research Center, 2024).
Meta’s interest follows its decision earlier this year to discontinue its third-party fact-checking program in the United States, with **Mark Zuckerberg** likening the program to censorship and echoing language used by the Republican Party.
The European Union, having established the Digital Services Act (DSA) to combat illegal content and disinformation online, may soon need to clarify the specific obligations of social networks regarding content moderation.