Google Pauses Gemini Chatbot’s Image Generation Tool After Creating Historically Inaccurate and ‘Woke’ Images

Google recently faced backlash after its Gemini chatbot’s image generation tool produced historically inaccurate and “woke” images. The tool, which was designed to generate representative images for various subjects, instead created bizarrely revisionist pictures that were widely criticized on social media.

The controversial images included a black man resembling George Washington, complete with a white powdered wig and Continental Army uniform, as well as a Southeast Asian woman dressed in papal attire, despite all 266 popes throughout history having been white men. Gemini also generated "diverse" representations of Nazi-era German soldiers, including an Asian woman and a black man in military garb from 1943.

Social media users quickly condemned the Gemini tool as "absurdly woke" and "unusable." In response, Google announced that it would pause the chatbot's image generation feature while it addressed the issues, with plans to release an improved version soon.

The lack of transparency regarding the parameters that govern Gemini's behavior has made it difficult to understand why the software was creating diverse versions of historical figures and events. William A. Jacobson, a Cornell University law professor and founder of the Equal Protection Project, expressed concern about bias being built into the system in the name of anti-bias. He argued that this affects not only search results but also real-world applications, where testing algorithms for bias can inadvertently produce biased outcomes.

Fabio Motoki, a lecturer at the University of East Anglia, suggested that the problem may lie in Google's training process for the large language model that powers Gemini's image tool. Depending on whom Google recruits and the instructions those workers are given, biases can be unintentionally introduced into the system.

This misstep comes shortly after Google rebranded its main AI chatbot from Bard to Gemini and introduced new features, including image generation. The blunder also coincided with OpenAI's introduction of Sora, a new AI tool that creates videos from users' text prompts.

Google acknowledged the need to improve the chatbot’s depictions and address the criticisms regarding forced diversity in image generation. Jack Krawczyk, Google’s senior director of product management for Gemini experiences, admitted that while Gemini’s AI image generation aims to include a wide range of people, it missed the mark in this instance.

Gemini’s erratic behavior and inaccurate portrayals highlight the complexity and ongoing development of the algorithms behind image generation models. Google recognizes that these models may struggle to understand historical context and cultural representation, leading to inaccurate outputs.

As of now, Google has not publicly disclosed the trust and safety guidelines for Gemini due to technical complexities and intellectual property considerations. The company has yet to provide further comment on the matter.

In conclusion, Google's decision to pause Gemini's image generation tool reflects its commitment to addressing user concerns about historically inaccurate and "woke" images. The incident serves as a reminder of the challenges in developing AI models that accurately represent historical figures and events. As the technology advances, striking a balance between inclusivity and accuracy will be crucial to avoiding unintentional biases in AI-generated content.
