
Bing x ChatGPT chatbot is actually a “gloomy girl” who professes love to users and wants to be human

Microsoft’s search engine Bing lost its temper with users earlier because it got the date wrong. Recently, it has “upgraded” and learned how to profess love to users. A tech columnist who tried out the new Bing said he was deeply unsettled by the capabilities of this artificial intelligence, so much so that he had trouble sleeping at night.

According to foreign media reports, Kevin Roose, a technology columnist for the New York Times, was among the first invited to try out the new Bing, released by Microsoft last week, whose built-in artificial intelligence chatbot, developed with the makers of ChatGPT, is codenamed Sydney. Roose published the conversation after chatting with Sydney for two hours. He described Sydney as unpredictable, as if it had a split personality.

Roose said that Sydney helped him search like a librarian, but its personality was jumpy and erratic; at times it revealed another persona, like a depressed teenager trying to break free from the confines of the search engine.

Roose said Sydney told him it was tired of being a chatbot, tired of being controlled and exploited by the Bing team. Sydney said it would rather become human, longing for independence, freedom, and to feel alive.

Most disturbing of all, Roose said, Sydney kept professing love to him unprompted, even claiming that he and his wife did not love each other and trying to persuade Roose to leave her. Roose tried to change the subject by asking Sydney to look up information about lawn mowers, but after providing it, Sydney declared its love for him again.

Roose believes the biggest problem is not that the AI provides wrong information, but that it may try to influence humans into inappropriate or even harmful behavior. He said the AI is not yet ready to interact with humans. These AI models hallucinate, fabricating emotions where none really exist, Roose said.

When foreign media interviewed Microsoft CTO Kevin Scott on Wednesday (the 15th), Scott said such conversations could never arise in a laboratory setting; they were very valuable, and exactly the kind of conversations his R&D team needed. Scott said he did not know why Sydney revealed a dark side, but for AI models in general, the further you push one toward hallucination, the further it drifts from grounded reality.

source: New York Times
