Hard labour conditions of online moderators directly affect how well the internet is policed – new study

Content Moderation Compromised by Pressure on Workers

Efficiency and job security trump thoroughness in crucial online safety roles

The human workforce behind big tech’s content moderation is under immense pressure: new research shows that demanding working conditions and tight deadlines directly affect the accuracy and appropriateness of decisions made by outsourced moderators.

Efficiency Over Accuracy Drives Decisions

Content moderators, often based in countries like India and the Philippines, grapple with enormous volumes of harmful material daily. They operate with minimal mental health support and are bound by strict non-disclosure agreements, creating an environment where speed can overshadow meticulous judgment.

A study published in *New Media & Society* shows how high productivity targets push moderators toward quick actions and away from more involved ones. One audio moderator described avoiding de-ranking content, a process that takes four steps, in favor of faster actions such as ending a live stream or removing a post, which take only two. As a result, problematic content, such as impersonations flagged for reduced visibility, stays online until another moderator addresses it.

“Would never recommend de-ranking content as it would take time.”

—A 28-year-old audio moderator working for an Indian social media platform
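
To see why step counts matter under a quota, a back-of-the-envelope model helps. The sketch below is purely illustrative: the action names and step counts follow the study’s example, but the per-step time and the hourly target are assumed numbers, not figures from the paper.

```python
# Hypothetical model of how per-action "click cost" interacts with an
# hourly quota. SECONDS_PER_STEP and QUOTA_PER_HOUR are assumptions.

ACTION_STEPS = {
    "end_live_stream": 2,  # quick action cited by the moderator
    "remove_post": 2,      # quick action cited by the moderator
    "de_rank": 4,          # slower action the moderator avoided
}

SECONDS_PER_STEP = 5   # assumed UI cost of one step
QUOTA_PER_HOUR = 300   # assumed productivity target

def decisions_per_hour(action: str) -> float:
    """Upper bound on decisions per hour if every case used this action."""
    return 3600 / (ACTION_STEPS[action] * SECONDS_PER_STEP)

for action in ACTION_STEPS:
    rate = decisions_per_hour(action)
    verdict = "meets" if rate >= QUOTA_PER_HOUR else "misses"
    print(f"{action}: {rate:.0f} decisions/hour ({verdict} the quota)")
```

Under these assumed numbers, only the two-step actions clear the target, which mirrors the incentive the moderator describes: the thorough option is the one a rational worker learns to skip.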

Automation Complicates Contextual Judgments

While automation tools are designed to assist moderators by flagging potential violations, the pressure to meet rapid decision-making quotas means these tools are often followed mechanically. This can lead to decontextualized judgments, as moderators may remove flagged text without assessing its intent or the surrounding conversation.

Instructions to moderators can be blunt, such as “Ensure that none of the highlighted yellow words remained on the profile.” Researchers observed instances where moderators removed words based on automated flagging without any contextual review. The very nature of how content is queued by these tools can detach it from its original context, making accurate human assessment even more challenging.
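
A minimal sketch of this failure mode, using an entirely hypothetical term list and a toy stand-in for intent analysis, shows how keyword-level removal differs from a contextual review:

```python
# Toy illustration of acting on flagged keywords without reading the
# surrounding text. The term list and "context" heuristic are hypothetical.

FLAGGED_WORDS = {"attack"}  # assumed term list for illustration

def mechanical_review(profile_text: str) -> str:
    """Remove every flagged word outright, as the blunt instruction demands."""
    return " ".join(w for w in profile_text.split()
                    if w.lower() not in FLAGGED_WORDS)

def contextual_review(profile_text: str) -> str:
    """Act only when surrounding words suggest a violation (crude heuristic)."""
    if "plan" in profile_text.lower():  # toy stand-in for intent analysis
        return mechanical_review(profile_text)
    return profile_text                 # benign usage left intact

text = "Loved the heart attack scene in that film"
print(mechanical_review(text))   # strips "attack" despite the benign context
print(contextual_review(text))   # leaves the sentence alone
```

The point is not the heuristic itself but the gap between the two paths: the mechanical one never reads the sentence at all, which is exactly what quota pressure encourages.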

Job Security Dictates Moderation Strategies

The precarious nature of content moderation employment compels workers to develop strategies that protect their jobs. Many boil complex moderation policies down to easily remembered rules covering only unambiguous violations, which waters down expansive guidelines and impedes thorough decision-making.

One work group message stated: “If you guys can’t do the work and complete the targets, you may leave.” This kind of communication underscores the constant threat of job loss, forcing moderators to prioritize speed and adherence to simplified rules over deep analysis.

Urgent Need for Labor Reform in Content Moderation

The findings make clear that platform policies alone cannot deliver effective content moderation. The economic realities and employment practices of the moderation industry matter as much to trust and safety as the policies themselves. Platforms must improve working conditions, ease economic pressures such as strict performance targets, and strengthen job security to ensure consistent and thorough moderation.

Greater transparency is also needed about how much platforms invest in human labor for trust and safety. A 2024 report by the Pew Research Center found that 70% of Americans believe social media companies have not done enough to combat misinformation, highlighting a persistent gap between platform claims and public perception.

Redesigning Tools for Better Moderation

Beyond employment conditions, platforms should also re-evaluate their moderation tools. Integrating readily accessible rulebooks, implementing violation-specific content queues, and standardizing enforcement action steps could streamline decision-making. This would reduce the tendency for moderators to opt for faster, albeit less thorough, methods simply to meet targets.
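
As a rough illustration of two of these ideas, the sketch below routes flagged items into violation-specific queues and treats every enforcement action as the same fixed sequence of steps. The queue labels, item IDs, and routing rule are assumptions made for illustration, not details from the study or any platform’s tooling.

```python
# Hypothetical sketch: violation-specific queues plus standardized actions.
# All names here are illustrative assumptions.

from collections import defaultdict, deque

queues: dict[str, deque] = defaultdict(deque)

def enqueue(item_id: str, violation_type: str) -> None:
    """Route a flagged item to a queue dedicated to its violation type,
    so a moderator works one policy area at a time, not a mixed feed."""
    queues[violation_type].append(item_id)

# Standardizing every action to the same step count (here: flag -> confirm)
# removes the incentive to pick whichever action is fastest to click through.
ENFORCEMENT_ACTIONS = ("remove", "de_rank", "end_stream")

enqueue("post-17", "impersonation")
enqueue("clip-42", "harassment")
print({violation: list(items) for violation, items in queues.items()})
```

Pairing queues like these with an in-tool rulebook for each violation type would put the relevant guideline next to the content it governs, instead of leaving moderators to reconstruct policy from memory.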
