Meta’s Tightrope Walk: Balancing Growth and Youth Safety on Social Media
Meta is rolling out new features aimed at increasing safety for young users across its platforms, including Instagram and Facebook. These changes are being implemented amidst growing scrutiny and potential regulation, particularly in the EU.
The new measures include:
* Default private accounts for teens – Only approved followers can see content.
* Contact restrictions – Messages are limited to users the teen already follows.
* Sleep mode – Notifications are silenced between 10 p.m. and 7 a.m.
* Parental controls – Provide insights into a teen’s contacts and screen time (but do not grant access to message content).
Meta frames these steps as proactive protection, but critics suggest they are a preemptive move to avoid stricter government regulation, especially under the EU’s Digital Services Act, which demands demonstrable safety measures. A key challenge is the ease with which young people can bypass these protections by providing a false date of birth. Meta is exploring AI-powered age estimation to address this, a solution that raises new data privacy concerns.
Internal Documents Reveal Prioritization of Growth
Recently unsealed court documents from a US class action lawsuit have cast doubt on Meta’s commitment to safety. Reports from Time magazine and CBS News detail how internal warnings regarding child safety were reportedly ignored. Specifically, the company was allegedly slow to delete accounts linked to sexual predators and hesitant to implement warning systems designed to detect “grooming” behavior. The stated reason? Concerns that prioritizing safety would negatively impact user growth.
Testimony from a former security manager indicates that the threshold for suspending accounts involved in child exploitation was set unreasonably high, suggesting that the “lifetime value” of young users was prioritized over their safety. This directly contradicts Meta’s current public stance as a responsible platform.
Shifting Responsibility: Platform vs. App Store
Meta has proposed shifting the responsibility for age verification to Apple and Google. This strategy is seen as a way to minimize data collection, present a privacy-friendly image, and transfer the operational burden and potential liability to other companies.
However, governments in Australia and the EU maintain that platforms like Instagram bear the primary responsibility for protecting their users. They argue that Instagram’s algorithm-driven business model necessitates safety mechanisms within the platform itself, rather than relying solely on app store verification. Experts also point out that technical solutions like teen accounts are easily circumvented without robust age verification.
Looking Ahead: Increased Regulation and Potential Changes
The coming months will be crucial, particularly if Australia’s proposed legislation passes. This law would trigger a twelve-month implementation phase, closely monitored globally.
For EU users, potential changes include:
* More frequent age verification requests – possibly involving video selfies or ID uploads.
* Expansion of teen accounts – extending the model to Facebook and Threads.
* Algorithm adjustments – adapting recommendation systems for minors under pressure from EU regulators.
2025 could mark a turning point, potentially ending the era of rapid, unregulated growth for social media and ushering in a period of stricter, state-imposed digital limits for young people. The central question remains whether these changes are being implemented quickly enough to adequately protect vulnerable users.