Gabon suspended access to Facebook, TikTok, and YouTube on Monday, citing regulatory concerns, according to reports from Al Jazeera. The move comes as governments worldwide grapple with the proliferation of artificial intelligence-generated content, often referred to as “AI slop,” and its potential impact on public discourse and youth well-being.
The suspension in Gabon follows a similar pattern of scrutiny faced by major social media platforms. In Türkiye, authorities have launched investigations into TikTok, Instagram, and YouTube regarding their handling of children’s data, according to Türkiye Today. These probes center on concerns about data privacy and the potential for harmful content to reach young users.
The rise of AI-generated content is exacerbating these challenges. A 2025 Pew Research Center study of American social media use highlights the growing presence of AI-created material on platforms like Facebook, TikTok, and YouTube. This “AI slop” – ranging from fabricated news stories to manipulated images and videos – is becoming increasingly difficult to distinguish from authentic content.
Concerns are mounting that this influx of AI-generated misinformation is eroding trust in journalism and potentially manipulating public opinion. According to reporting from German tech news site heise+, the sheer volume of AI-generated content is overwhelming platforms’ ability to moderate it effectively. The algorithms that drive these platforms often prioritize engagement, inadvertently rewarding the spread of sensational and often false information.
The legal framework for addressing this issue remains underdeveloped. Although the European Union has enacted legislation such as the Digital Services Act (DSA) and the AI Act, the effectiveness of these measures in curbing the spread of AI slop is still uncertain, according to heise+. The report suggests that platform operators possess the tools to mitigate the problem but are sometimes reluctant to implement them fully.
Meta, TikTok, and YouTube are currently facing a trial over youth addiction claims, Reuters reported. The proceedings will examine allegations that these platforms are designed to be addictive, potentially harming the mental health of young users. The case underscores the growing pressure on social media companies to address the negative consequences of their platforms.
The situation is further complicated by the lack of political and legal preparedness for the widespread use of generative AI, according to heise+. The report concludes that governments are struggling to keep pace with rapid advances in AI technology and the resulting challenges to information integrity.