Instagram will begin notifying parents, via email, text message, or in-app notification, if their teenage children repeatedly search for content related to suicide or self-harm, the company announced Thursday. The alerts are limited to users of Instagram’s parental supervision tools.
The move comes as Meta, Instagram’s parent company, faces ongoing legal challenges alleging its platforms are harmful to young users. A trial is currently underway in Los Angeles examining claims that Meta deliberately designs its platforms to be addictive and detrimental to minors. A separate case in New Mexico centers on accusations that Meta failed to adequately protect children from sexual exploitation on its services. Thousands of families, school districts, and government entities have filed lawsuits against Meta and other social media companies, alleging addictive designs and a failure to safeguard children from harmful content linked to mental health issues, eating disorders, and suicide.
During the Los Angeles trial, Meta CEO Mark Zuckerberg defended the platforms, stating he still agrees with a previous assertion that scientific evidence hasn’t definitively proven a causal link between social media use and mental health harms.
According to a Meta blog post, the company already prevents such content from appearing in teens’ search results, instead directing them to resources like suicide prevention hotlines. “Our goal is to empower parents to step in if their teen’s searches suggest they may need support,” Meta stated. “We also want to avoid sending these notifications unnecessarily, which, if done too often, could make them less useful overall.”
Josh Golin, executive director of the nonprofit Fairplay, expressed skepticism about the new feature, arguing that Instagram is responding to the current legal pressures. “Once again, Meta is shifting the burden to parents rather than fixing the dangerous flaws in how it designs its algorithms and platforms,” Golin said. “And all children deserve to be protected, regardless of whether their parents have enrolled in and utilize Meta’s supervision tools. If a product is not safe for teens to use without parental intervention, it shouldn’t be marketed to teens at all.”
Meta also announced it is developing similar notifications to alert parents about their children’s interactions with artificial intelligence features on the platform. These notifications will flag conversations related to suicide or self-harm. The company stated it will provide further details in the coming months.
Individuals struggling with suicidal thoughts or those concerned about a friend or loved one can contact the National Suicide Prevention Lifeline at 1-800-273-8255 or text TALK to 741-741 for free, confidential support 24/7.