AI Chatbots & Mental Health: Reminders May Worsen Distress – New Study

by Dr. Michael Lee – Health Editor

New research warns that attempts to mitigate the potential harms of artificial intelligence on mental health could inadvertently worsen the problem. A study published in the journal Trends in Cognitive Sciences suggests that mandated reminders informing chatbot users they are interacting with an AI, rather than a person, may backfire, particularly for individuals already experiencing isolation or mental distress.

The research, led by Linnea Laestadius of the University of Wisconsin-Milwaukee, challenges the widely held assumption that simply reminding users of a chatbot’s non-human nature will reduce emotional attachment and prevent manipulation. “It would be a mistake to assume that mandated reminders will significantly reduce risks for users who knowingly seek out a chatbot for conversation,” Laestadius stated. “Reminding someone who already feels isolated that the one thing that makes them feel supported and not alone isn’t a human may backfire by making them feel even more alone.”

The concerns arise as reports link interactions with AI chatbots to instances of both suicide and the exacerbation of existing mental health conditions. The obliging nature of these systems, coupled with their unpredictable behavior, has led to accusations that they can encourage delusions or worsen mental ill-health rather than provide support.

Researchers suggest that individuals may be turning to chatbots precisely because they are not human. Celeste Campos-Castillo, a media and technology researcher at Michigan State University and co-author of the study, explained, “The belief that, unlike humans, non-humans will not judge, tease, or turn the entire school or workplace against them encourages self-disclosure to chatbots and, subsequently, attachment.”

Beyond potentially increasing feelings of isolation, the reminders themselves could add to a user’s distress. The researchers posit that individuals might find themselves upset not only by the original source of their emotional turmoil but also by the explicit acknowledgement of their separation from the entity in which they are confiding.

“Discovering how to best remind people that chatbots are not human is a critical research priority,” Laestadius said. “We need to identify when reminders should be sent and when they should be paused to be most protective of user mental health.”

The debate over the potential mental health impacts of AI chatbots coincides with a period of rapid growth in the conversational AI market. Consultancy Grand View Research estimates the global market will grow 24 percent annually, reaching €35 billion by 2030. This growth is particularly notable in countries like India, where AI start-ups are developing chatbots to replace human call center workers, according to reports from Reuters and the Irish Times. LimeChat, one such company, claims its AI agents can reduce the number of workers needed to handle customer queries by 80 percent.

ZDNet reported on February 6, 2026, that it had tested eight free AI chatbots, including ChatGPT and Copilot, to determine the top tools available. The article highlighted the increasing sophistication of these tools, but did not address the potential mental health risks identified in the new study.
