The Hidden Human Labor Powering Meta’s AI
Scale AI, a firm in which Meta holds a 49% stake, uses a gig-work platform called Outlier to scrape personal social media data and copyrighted content for AI training. The practice has exposed “taskers” to traumatic material, sparking class-action lawsuits over mental health harms and systemic privacy violations across the internet.
The polished image of generative AI—the seamless assistants and instant art—relies on a hidden, gritty underbelly of human labor. Although Meta markets its Meta AI assistant as a tool for productivity and creativity, the actual “cleaning” of the data that feeds these models is a visceral, often disturbing process. The result is a stark divergence between the high-level academic expertise these firms claim to recruit and the tasks actually being performed in the digital trenches.
The Bait and Switch of the ‘Expert’ Tasker
Scale AI recruited a sophisticated workforce for its Outlier platform. The company targeted experts in medicine, physics, and economics, promising flexible work where these professionals could “become the expert that AI learns from.” The goal was ostensibly to refine top-tier artificial intelligence systems. However, the reality for many workers was far less academic.

Instead of refining complex physics equations or medical diagnoses, tens of thousands of gig workers found themselves combing through Instagram accounts, harvesting copyrighted works, and transcribing the soundtracks of pornographic videos. The tasks shifted from intellectual refinement to the raw, manual scraping of personal data and the moderation of “depraved” content. This shifts a heavy psychological burden onto the workers involved, who are often left without support while processing the worst corners of the web.
“Each of the plaintiffs has suffered mental and emotional trauma from being exposed to depraved content in service of Scale AI’s various projects.”
This statement from Glenn Danas, lead counsel for the plaintiffs, underscores a growing crisis in the AI supply chain. For workers suffering from job-induced trauma, retaining specialized employment lawyers is no longer a luxury but a necessity if these algorithmic giants are to be held accountable for the mental health of their invisible workforce.
The Power Nexus: From the Pentagon to Meta
The architecture of Scale AI is not merely a corporate entity; it is a bridge between Silicon Valley, the US defense apparatus, and the highest levels of government. The company maintains active contracts with the Pentagon and various US defense firms, positioning AI training as a matter of national security.
The revolving door of leadership further complicates the ethical landscape. Alexandr Wang, the former CEO of Scale AI and once the world’s youngest self-made billionaire, now serves as Meta’s chief AI officer. Meanwhile, Michael Kratsios, the former managing director of Scale AI, has moved into the role of science adviser to US President Donald Trump. This tight integration of private AI training, government surveillance, and social media dominance means that the data being scraped—often without consent—is flowing into systems with immense geopolitical influence.
When personal profiles and copyrighted works are absorbed into these systems, the legal ramifications extend beyond simple copyright infringement. It becomes a question of systemic privacy erosion. Individuals and businesses are increasingly turning to data privacy consultants to audit their digital footprints and implement protections against this aggressive harvesting.
The Automated Machine: Meta External Agent
While human “taskers” handle the nuanced and disturbing content, Meta has simultaneously deployed an automated army. The Meta External Agent is a web crawler designed to scour the internet and collect data en masse to feed the Llama large language model. Unlike the human taskers who feel the moral weight of their work, this bot operates with cold efficiency, copying text from news articles and conversations from online discussion groups.
Meta has attempted to frame this as standard industry practice, noting that it trains models on “publicly available” content. However, the distinction between “publicly available” and “consented for AI training” is the central battlefield of current intellectual property law. The use of the Meta External Agent represents a shift toward total data ingestion, in which the internet is treated not as a collection of human expression but as a raw mineral deposit to be mined.
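Site owners who want to opt out can, at least in principle, refuse the crawler via their robots.txt file. The sketch below assumes the user-agent token “meta-externalagent,” which Meta has published for this bot; whether a scraper honors such directives is, of course, precisely what is in dispute.

```text
# robots.txt — a minimal opt-out sketch, assuming Meta's published
# crawler token "meta-externalagent"
User-agent: meta-externalagent
Disallow: /
```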
A Legal Reckoning in the Courts
The tension between AI progress and human rights has finally reached a breaking point. A nationwide class-action lawsuit filed in January 2025 targets Scale AI, Outlier AI, and Smart Ecosystem. The suit alleges that these companies—performing work for both Google and Meta—exposed independent workers to traumatic content without adequate warnings or protections.
This litigation is critical because it challenges the “independent contractor” shield that gig-economy firms use to avoid providing health benefits or mental health support. By categorizing these workers as taskers rather than employees, Scale AI attempted to outsource the psychological cost of AI training.
As these cases move through the US court system, the precedent they set will determine whether AI firms are responsible for the “human cost” of their data cleaning. For workers caught in these systems, coordinating with class action attorneys may be the only viable path to compensation for the trauma endured in the name of technological advancement.
The “magic” of AI is a mirage. Behind every fluid response from a chatbot is a trail of scraped Instagram photos, copyrighted prose taken without permission, and a workforce of traumatized humans staring at the most depraved images the internet has to offer. We are building the future on a foundation of digital exploitation and psychological attrition. The question is no longer whether AI can learn, but what we are willing to sacrifice to teach it. As the legal battles intensify, only rigorous privacy and labor law enforcement can ensure that human dignity isn’t the price of innovation.
