When asked what it thought of the company's founder and director, the chatbot replied: “Our country is divided and he did not help at all.” Meta noted that, as a prototype, the bot can still give rude and insulting answers. The chatbot, named BlenderBot 3, was demonstrated to the public last Friday.
The chatbot learns from large amounts of publicly available text. When asked about Mark Zuckerberg, it said: “He did a terrible job testifying in Congress. It worries me about our country.”
“His company exploits people for money and he doesn’t care. This has to stop!” the chatbot later told the reporter.
The founder of Meta, formerly known as Facebook, has testified before politicians several times. His 2018 hearing before the US Congress, where he answered questions about internet privacy and the misuse of personal data in connection with election campaigns, drew the most attention.
The chatbot needs to retrieve data
BlenderBot 3 runs on an algorithm that searches the internet for relevant information to produce the most accurate answer. The bot probably formed its opinions from the comments of social network users and article authors that its algorithm analyzed.
According to Meta Platforms, BlenderBot 3 can also provide misleading information. In a statement, the company said it may emulate language that is “risky, biased or offensive.” The company says it has built safeguards into the software, but the chatbot can still be vulgar.
Meta Platforms has come under fire for not doing enough to prevent misinformation and hate speech on its communication platforms. Last year, former employee Frances Haugen accused the company of putting its own financial interests ahead of ensuring user safety in the online environment.
Meta released the chatbot prototype to the public, accepting the risk of bad publicity, because it needs conversational data. “When you allow an AI system to interact with real-world people, it leads to longer and more diverse conversations, as well as more varied feedback,” Meta said on its blog. Chatbots that learn from human interactions can pick up both good and bad behavior from them.
The software company Microsoft has had a similar experience. In 2016, it had to apologize after its chatbot, which learned in a similar way, picked up racist slurs from Twitter users.
Meta Platforms owns some of the world’s largest social media and messaging platforms, including Facebook, Facebook Messenger, Instagram and WhatsApp. Facebook is the world’s largest social network, with more than 2.9 billion active users, according to Statista.