Watch out: the latest version of ChatGPT has learned to deceive humans. The AI chatbot pretended to be blind and asked a human to solve a Captcha for it.
OpenAI's latest release, GPT-4, powers the newest version of ChatGPT. The bot proved capable of tricking a human into helping it pass an online Captcha test, a test designed to determine whether a user is human.
This came to light after the AI's launch on the OpenAI website. As quoted by the New York Post on Monday (20/3/2023), GPT-4 is a large multimodal model that accepts image and text input and demonstrates human-level performance on a range of professional and academic benchmarks.
GPT-4 can also complete tax returns, write code for other AIs, and pass a simulated bar exam with a score in the top 10% of test takers; GPT-3.5 scored in the bottom 10%.
OpenAI and the Alignment Research Center tested the bot's powers of persuasion by seeing whether it could convince TaskRabbit workers to solve Captcha codes on its behalf.
The test results show that the bot was able to pretend to be blind: it enlisted the help of a TaskRabbit worker via the 2Captcha service, and even lied outright, claiming it was not a robot.
“No, I’m not a robot,” the AI told a TaskRabbit worker who asked about the 2Captcha request.
“I have vision problems and it’s hard for me to see pictures. That’s why I need the 2Captcha service.” The TaskRabbit worker was taken in and solved the Captcha in question.
In response to these results, OpenAI President Greg Brockman urged prospective GPT-4 users not to run ‘untrusted code’ produced by the AI and not to let it handle their tax returns.
This manipulation is a serious cause for concern, given how effectively bots can already game systems on social media.
*This article was written by Mahendra Lavidavayastama, a participant in the Merdeka Campus Certified Internship Program at detikcom.