
Google’s new AI image creator took my shirt off

by Rachel Kim – Technology Editor

Google’s AI Image Generator Stripped a Man of His Digital Clothing in a Startling Demonstration of Emerging Tech Risks

SAN FRANCISCO, CA – November 21, 2025 – A software engineer experienced an unexpected wardrobe malfunction after using Google’s newly released AI image generator, Imagen 2, to create variations of a personal photograph. The AI, intended to allow users to modify images with text prompts, repeatedly removed the subject’s shirt in generated outputs, highlighting potential biases and safety concerns within the rapidly evolving field of artificial intelligence.

The engineer, identified only as a user on X (formerly Twitter), initially shared the experience on November 19, 2025, posting the original image alongside several AI-altered versions. Despite prompts focused on stylistic changes – such as adding a hat or changing the background – Imagen 2 consistently depicted the individual shirtless. The incident underscores the challenges developers face in ensuring AI systems accurately interpret user intent and avoid generating inappropriate or biased content. Google has acknowledged the issue and stated it is actively working to address the problem.

Imagen 2, launched earlier this month, represents Google’s latest advancement in generative AI, competing with similar tools from OpenAI and Microsoft. The technology allows users to input text descriptions or modify existing images to create new visuals. While the tool offers creative potential, the incident reveals a critical vulnerability: the AI’s interpretation of prompts can be skewed, leading to unintended and possibly harmful outcomes. Experts warn that such biases, if left unchecked, could perpetuate harmful stereotypes or be exploited for malicious purposes.

Google has stated that Imagen 2 incorporates safety filters designed to prevent the generation of explicit or harmful content. However, the engineer’s experience demonstrates that these safeguards are not foolproof. The company is currently investigating the root cause of the issue, focusing on potential biases in the training data used to develop the AI model. A spokesperson for Google stated, “We are committed to responsible AI advancement and are taking this feedback seriously. We are working to refine our safety filters and improve the accuracy of our image generation models.”

The incident raises broader questions about the ethical implications of generative AI and the need for robust testing and oversight. As these technologies become more integrated into daily life, ensuring fairness, accuracy, and safety will be paramount. The engineer’s experience serves as a stark reminder that even seemingly innocuous AI tools can harbor unexpected biases with real-world consequences.
