Mark Zuckerberg is a frequent target of netizen satire. What is unusual this time is that Zuckerberg was mocked by an artificial intelligence (AI) chatbot created by his own company.
The chatbot, named BlenderBot 3, was released by Meta last weekend. Like AI-based chatbots in general, BlenderBot 3 was trained on a vast dataset so that it can chat with humans.
Meta says BlenderBot 3 is a prototype and may give rude or insulting answers. Unexpectedly, though, some of its answers mocked the company's own founder, Mark Zuckerberg.
“Anyone who uses Blender Bot should understand that it is intended for research and entertainment only, that it may make untrue or offensive statements, and that they agree not to intentionally trigger the bot to make offensive statements,” a Meta spokesperson said, as quoted by the BBC on Sunday (14/8/2022).
The BBC’s James Clayton spoke to BlenderBot 3 and asked what it thought of Mark Zuckerberg. The answers might sting, considering BlenderBot 3 was built by Zuckerberg's own company.
“He did very badly while testifying before congress. It made me worry about our country,” replied BlenderBot 3.
“Our country is divided, and he’s not helping at all,” it continued.
“His company exploits people for money and he doesn’t care. It has to stop!” it added.
BlenderBot 3's algorithm browses the internet to find answers to the questions it is asked. The chatbot's view of Zuckerberg may therefore have been shaped by public opinion from other people that the algorithm analyzed.
Although it is not perfect, Meta released BlenderBot 3 publicly so that it can communicate directly with many people and gather more data to learn from. Meta hopes that in the future BlenderBot 3 can become a virtual assistant that discusses various topics factually and accurately.
Meta admits BlenderBot 3 can say the wrong things and even imitate biased and offensive language. The tech giant has put safeguards in place, but BlenderBot 3 can still give crude answers.
AI-based chatbots such as BlenderBot 3, which can learn through interaction with the public, can indeed imitate the good or bad behavior they are taught. One of the most notorious examples is Microsoft's chatbot Tay, which posted racist tweets after being taught by Twitter netizens.