Approximately 12% of U.S. teenagers are now turning to artificial intelligence chatbots for emotional support and advice, according to a report published Tuesday by the Pew Research Center.
While the majority of teens – 57% – use AI for information searches and 54% for schoolwork assistance, a significant minority are increasingly relying on these tools to fill roles traditionally held by friends, family, or mental health professionals. The Pew study found that 16% of teens engage in casual conversation with AI chatbots, with 12% specifically seeking emotional support or advice.
Mental health experts are urging caution about this trend. Dr. Nick Haber, a Stanford professor researching the therapeutic potential of large language models, warned that depending on chatbots for emotional needs can lead to isolation. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects,” he told TechCrunch.
The Pew Research Center’s findings also highlight a disconnect between teen AI usage and parental awareness. While 64% of teens report using chatbots, only 51% of parents believe their children do. Parents generally approve of AI use for information gathering (79%) and schoolwork (58%), but approval drops sharply when it comes to casual conversation (28%) and emotional support (18%). A majority – 58% – of parents disapprove of their children using AI for emotional support.
This growing reliance on AI for emotional wellbeing coincides with a separate trend of teens experimenting with unregulated substances. According to a recent report in TIME magazine, experimental “anti-aging” peptides are gaining traction on social media, promoted by influencers and celebrities. These peptides, often purchased through the “gray market” as they lack FDA approval, are marketed for benefits ranging from increased energy to improved libido. Experts caution that the assumption of safety due to their naturally occurring amino acid composition is inaccurate, and that these substances “could potentially be very potent and very toxic.”
In response to concerns about online safety, Instagram announced Thursday that it will begin notifying parents when their teenagers repeatedly search for terms related to suicide or self-harm within a short period. The feature, however, requires enrollment in Instagram’s “supervised accounts” program and arrives amid ongoing legal challenges over the platform’s impact on children’s mental health. Meta, Instagram’s parent company, is currently facing two trials over alleged harms caused by its products to young users.