
Google AI Missteps: Dark-Skinned Nazis Invented by Gemini Program Spark Concern Over Company’s Control and Influence

An artificial-intelligence model invents dark-skinned Nazis: a mistake by Google's Gemini program that might have gone unnoticed once it was quickly corrected, but one that highlights the growing, outsized influence a handful of companies wield over this fundamental technology.

Google CEO Sundar Pichai last month called the errors of his company's Gemini AI app "completely unacceptable," after missteps such as images of racially diverse Nazi troops forced the company to temporarily block users from generating images of people.

Social media users mocked and criticized Google for historically inaccurate images, such as those showing a Black American woman elected to the Senate in the nineteenth century, something that did not in fact happen before 1992.

“We definitely had errors in the image generation process,” Google co-founder Sergey Brin said at a recent artificial intelligence hackathon, adding that the company should have tested the Gemini program more comprehensively.

People interviewed at the famous South by Southwest arts and technology festival in Austin said that Gemini's stumble highlights the outsized power a handful of companies hold over artificial intelligence platforms that are poised to change the way people live and work.

Lawyer and technology entrepreneur Joshua Weaver said Google initially took a path in its software that "excessively" embraced "woke" principles (opposing all forms of discrimination and inequality), meaning it went too far in its efforts to highlight inclusion and diversity.

Charlie Burgoyne, CEO of the Valkyrie applied science lab in Texas, said that while Google quickly corrected its errors, the fundamental problem remains. He likened Google's fix for the Gemini bug to putting a bandage on a bullet wound.

Weaver pointed out that while Google once had the luxury of ample time to refine its products, it is now rushing into the artificial intelligence race against Microsoft, OpenAI, Anthropic and others, adding: "They are moving at a speed that exceeds the limits of their knowledge."

Mistakes made in the name of cultural sensitivity become flashpoints, particularly given the tense political divisions in the United States, a situation exacerbated by billionaire Elon Musk's X platform.

"People on Twitter (now X) do not hesitate to celebrate any embarrassing event in technology," Weaver said, adding that the reaction to the Gemini gaffe was "exaggerated."

However, Weaver emphasized that this incident raised questions about the degree to which those who use AI tools control the information.

He noted that over the next decade, the amount of information, both true and false, generated by AI could exceed that generated by humans, meaning that those who control AI safeguards will have an enormous impact on the world.

– Internal and external bias –

Karen Palmer, an award-winning mixed-reality creator with Interactive Films Ltd., said she envisions a future in which someone gets into a robotaxi and, "if the AI scans your face and thinks there are any outstanding violations against you... will take you to the local police station" rather than the intended destination.

AI is trained using vast amounts of data and can be used for an increasing range of tasks, from creating images or sound to determining whether someone is eligible for a loan, or whether a medical test detects cancer.

But this data comes from a world rife with cultural bias, misinformation, and social inequality, as well as online content that ranges from casual conversations between friends to intentionally provocative, exaggerated posts; AI models can replicate all of these flaws.

With Gemini, Google engineers attempted to rebalance algorithms to provide results that better reflect human diversity.

But these efforts backfired.

“It can be really difficult to know where the bias is and how it was embedded,” said technology lawyer Alex Shahristani, a managing partner at Promise Legal, a law firm that works with technology companies.

He and others believe that even well-intentioned engineers involved in AI training can’t help but bring their own life experiences and unconscious bias into the process.

Valkyrie's Burgoyne also criticized big tech companies for keeping the inner workings of their generative AI hidden in "black boxes," so that users cannot detect any hidden biases.

“The capabilities of the deliverables have far exceeded our understanding of the methodology,” he said.

Experts and activists are calling for greater diversity in the teams that build AI and related tools, and for greater transparency about how they work, especially when algorithms rewrite user requests to "improve" results.

2024-03-17 13:40:00
