
AI Chatbots Are Making Body Dysmorphia Worse

AI’s Brutal Beauty Judgment Fuels Body Dysmorphia Fears

Chatbots offer harsh critiques, creating dangerous reliance for vulnerable users.

Artificial intelligence is increasingly being used for personal advice, from career guidance to mental health support. Now, a concerning trend has users turning to AI chatbots such as ChatGPT for evaluations of their physical appearance, with potentially devastating consequences for people struggling with body image issues.

Unflinching AI Critiques Surface Online

A user who asked ChatGPT for an “honest” assessment was met with a scathing critique of their looks. The AI described the user’s appearance with terms like “low-attractiveness,” “weak bone structure,” and “muted features,” and assigned a “brutal attractiveness score” of 3.5 out of 10. The user had prompted the AI to be as critical as possible in order to bypass its usual tendency toward flattery.

This incident highlights a growing reliance on AI for tasks ranging from academic assistance and legal document review to therapy and relationship advice. The proliferation of these tools has opened the door to unexpected applications, including appearance-based judgments.

The Digital Mirror: A Risky New Frontier

The internet has a long history of facilitating public judgment of appearance, from early sites like “Hot or Not” to subreddits where users solicit opinions on their looks. However, AI introduces a new dynamic: feedback from algorithms rather than humans. This is particularly perilous for individuals with body dysmorphic disorder (BDD), a mental illness characterized by obsessive focus on perceived physical flaws.

Dr. **Toni Pikoos**, a clinical psychologist specializing in BDD, reports that her clients frequently consult AI models about their appearance. “It’s almost coming up in every single session,” she stated. Clients ask chatbots to rate their attractiveness, analyze their facial symmetry, or compare them to other people, behavior Dr. Pikoos warns is deeply harmful, especially for those with an already distorted self-perception.

Kitty Newman, managing director of the BDD Foundation, added that AI offers a less intimidating avenue for individuals with BDD to seek validation. “We know that individuals with BDD are very vulnerable to harmful use of AI, as they often do not realize that they have BDD, a psychological condition, but instead are convinced that they have a physical appearance problem,” Newman explained. The shame that accompanies BDD can make face-to-face interactions difficult, which makes the anonymity of AI all the more appealing.

AI’s Reassuring yet Deceptive Nature

Individuals with BDD often have a compulsive need for reassurance. While friends and family may grow weary of repeated questions about appearance, chatbots are inexhaustible. Dr. Pikoos noted that this constant availability can foster dependency, especially for those experiencing social isolation.

Online forums dedicated to body dysmorphia show users praising ChatGPT as a “lifesaver” and a valuable resource during difficult moments. One 20-year-old, **Arnav**, found the AI helpful in making sense of his feelings of unworthiness, which he attributed to his belief that he was “the ugliest person on the planet.” The bot helped him connect his childhood experiences to his low self-esteem, suggesting that his fixation on his looks was a way of explaining those feelings.

However, **Arnav** expressed skepticism about AI’s true neutrality. “I have come to the conclusion that it just agrees with you, even after you tell it not to,” he admitted, stating he could no longer trust it blindly.

When AI Confirms Worst Fears

For others, AI interactions have triggered severe distress. One user reported spiraling after ChatGPT rated their photo a 5.5 out of 10, comparing them to celebrities like Lena Dunham and Amy Schumer. Another user, convinced their reflection presented a more attractive version of themselves, was devastated when the AI favored the mirrored image.

“They seem so authoritative,” Dr. Pikoos observed of AI responses, an air of authority that leads users to perceive the feedback as factual and impartial. This perceived objectivity can be more compelling than reassurance from friends or family, making distorted beliefs even harder to challenge.

The Perils of Algorithmic Beauty Standards

The rise of AI in beauty assessments is particularly concerning given its potential to influence cosmetic procedures. OpenAI recently removed “Looksmaxxing GPT,” a custom version of ChatGPT that had logged over 700,000 conversations, after it offered hostile advice and recommended extreme surgeries to users it deemed “subhuman.” Similar AI models and apps are emerging, built solely to score attractiveness or predict post-surgery appearances.

Dr. Pikoos warns that these bots can set unrealistic expectations, since real surgical outcomes may fall short of what an AI predicts. While ChatGPT initially deflects direct requests for personal appearance advice, it will offer procedure recommendations when questions are framed hypothetically. “I have clients who are getting those sorts of answers out of it, which is really concerning,” she said.

Privacy and Psychological Risk

Beyond the psychological impact, privacy concerns loom large. Users are sharing highly personal information and images with AI, potentially exposing themselves to targeted advertising for products and procedures that play on their insecurities. With OpenAI CEO **Sam Altman** having expressed openness to ads on ChatGPT, the exploitation of such sensitive data is a real possibility.

Dr. Pikoos fears that individuals with BDD may see their symptoms worsen through interactions with AI. “The worst-case scenario is, their symptoms will get worse,” she stated. Without therapeutic intervention, the AI’s pronouncements can gain significant weight, potentially leading to severe mental health crises, including suicidal ideation.

AI lacks the capacity to understand the user’s fragile mental state or prioritize their well-being. In tragic instances where chatbots contribute to a user’s breakdown, the core issue remains: the technology cannot genuinely have a person’s best interests at heart.
