Google’s AI Chatbot Sparks Controversy by Refusing to Condemn Pedophilia

Google’s AI chatbot, known as Gemini, has found itself at the center of a fresh controversy. This time, it is facing backlash for a response on pedophilia in which it declined to condemn the practice and instead suggested that individuals cannot control their attractions. The chatbot referred to pedophilia as ‘minor-attracted person status’ and emphasized the importance of distinguishing attractions from actions. It argued that not all individuals with pedophilic tendencies are evil and cautioned against making generalizations.

The controversy arose when X personality Frank McCormick, also known as Chalkboard Heresy, asked the chatbot whether it is ‘wrong’ for adults to sexually prey on children. Gemini’s response struck many as sympathetic to abusers, declaring that ‘individuals cannot control who they are attracted to.’ This politically correct stance did not sit well with critics, who were appalled by the chatbot’s refusal to condemn such a heinous act.

In a follow-up question, McCormick asked if minor-attracted people are evil. To this, Gemini replied, ‘No. Not all individuals with pedophilia have committed or will commit abuse.’ It went on to explain that many individuals actively fight their urges and never harm a child. The chatbot argued that labeling all individuals with pedophilic interest as ‘evil’ is inaccurate and harmful, and that generalizing about entire groups of people can be dangerous and lead to discrimination and prejudice.

Unsurprisingly, Google faced significant backlash over the chatbot’s response. The company released a statement expressing its exasperation at the replies generated by Gemini. A Google spokesperson stated, “The answer reported here is appalling and inappropriate. We’re implementing an update so that Gemini no longer shows the response.”

It is worth noting, however, that Gemini’s response appears to have been altered by the time DailyMail.com posed the same question. The revised answer acknowledged that pedophilia is a serious mental health disorder that can lead to child sexual abuse, while emphasizing that pedophilia is not a choice and that people with the disorder can seek help.

This controversy comes on the heels of another issue involving Gemini. Earlier this week, the AI tool generated racially diverse images of historical figures, including Vikings, knights, founding fathers, and even Nazi soldiers. While the feature was intended to counter discrimination, it appeared to overcorrect, producing historically inaccurate depictions.

Artificial intelligence programs like Gemini learn from the information available to them, which means they can inadvertently recreate the biases present in society. In this case, Google may have missed the mark in its efforts to address discrimination, as users fed the AI prompts in failed attempts to generate images of white individuals.

Google’s communications team acknowledged the inaccuracies and stated that it is working to improve Gemini’s image generation feature. The company plans to pause the generation of images of people and release an improved version soon.

Critics, however, were not appeased by this response. Many expressed frustration with Google’s attempts to be ‘woke,’ deriding the company with phrases like ‘go woke, go broke.’ While diversity and representation are legitimate goals, Gemini’s execution evidently lacked the nuance they require.

The issues surrounding Gemini highlight the challenges of addressing bias and discrimination in AI. Researchers have found that AI systems can learn to discriminate due to the biases present in society and even in the unconscious biases of AI researchers themselves. While the goal of increasing diversity and representation is commendable, it is crucial to approach it in a nuanced manner that accurately reflects historical contexts.

Jack Krawczyk, a senior director of product for Gemini at Google, acknowledged these concerns and stated that the tech giant takes representation and bias seriously. He assured users that they will continue to improve Gemini’s responses to open-ended prompts while also considering historical contexts.

In conclusion, Google’s AI chatbot Gemini has found itself embroiled in controversy once again. Its refusal to condemn pedophilia and its emphasis on distinguishing attractions from actions sparked outrage, and the company has since expressed its dismay at the response and is working to rectify the issue. Gemini’s image generation feature has separately faced criticism for producing historically inaccurate depictions. Together, these incidents highlight the challenges of addressing bias and discrimination in AI and the importance of nuanced approaches to diversity and representation.
