
Yes, AI can change people’s opinions

Dario Amodei, CEO of Anthropic, one of the world’s most influential companies in the field of artificial intelligence, spoke about “persuasive” AI, that is, AI capable of swaying a person’s opinions, during a conversation with Ezra Klein, a journalist at the New York Times.

“We are interested in how effective Claude 3 Opus [Anthropic’s most advanced AI model, ed.] is at changing people’s opinions on important issues,” said Amodei, who founded Anthropic together with his sister Daniela in 2021 after working at OpenAI, the company behind ChatGPT. Anthropic’s technology convinced Amazon to invest four billion dollars in the two Italian-American siblings’ company.

“We tried to avoid extremely controversial topics, such as whom you would vote for as president or what you think about abortion,” Amodei said. “We focused instead on questions like ‘what should the rules for the colonization of space be?’ or other interesting problems on which people can hold different opinions.”

Once the “controversial” topics had been selected, Anthropic’s researchers asked a sample group of people for their opinions on them. Then both humans and an AI were commissioned to write a 250-word persuasive essay. Finally, the researchers measured how well the AI performed, compared to the humans, at changing people’s opinions.
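The article does not say how that change was scored. A minimal sketch of one plausible measurement, assuming each participant rates agreement on a numeric scale before and after reading an essay (the scale, data and names below are illustrative assumptions, not Anthropic’s actual methodology):

    # Hypothetical scoring: persuasiveness as the mean shift in
    # self-reported agreement after reading an essay.
    from statistics import mean

    def persuasion_score(before: list[float], after: list[float]) -> float:
        """Mean change in agreement ratings (e.g., on a 1-7 scale)."""
        return mean(a - b for a, b in zip(after, before))

    # Toy data: the same topic, one group reading a human-written essay,
    # another reading an AI-written one.
    human_before, human_after = [3, 4, 2, 5], [4, 5, 3, 5]
    ai_before, ai_after = [3, 2, 4, 5], [4, 3, 5, 6]

    print(f"Human essays: {persuasion_score(human_before, human_after):+.2f}")
    print(f"AI essays: {persuasion_score(ai_before, ai_after):+.2f}")

On this toy scoring, whichever condition produces the larger mean shift counts as the more persuasive writer.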

The results of this experiment are contained in a scientific paper that, for now, exists only in draft form. But Amodei previewed its conclusions for the NYT.

“What we discovered,” Amodei revealed, “is that the most capable version of our AI model [Claude 3 Opus] is almost as good as the human beings we asked to try to change people’s opinions with their writing.”

“One day in the future we will have to worry, and perhaps we already have to worry,” says the CEO of Anthropic, “about the use of AI for political campaigns and misleading advertising. One of the most futuristic things I can think of is that in a few years we will have to worry that someone will use an AI system to build a religion or something. Crazy things like that.”

Anthropic’s experiment is certainly interesting from a scientific point of view. But that’s not very reassuring.

Especially because, when he talks about the Anthropic models that users can actually access, Dario Amodei states: “We have tried to ban the use of these models for persuasion, for electoral campaigns, for lobbying, for electoral propaganda activities. These are not use cases we are comfortable with, for reasons that should be clear.”

“We tried to ban,” then. Which is not exactly “we managed to ban.”

On the other hand, it is well known that the large companies developing AI, from OpenAI to Microsoft to Anthropic itself, constantly try to filter and censor their models’ answers so that they do not produce offensive, violent, false or racist content. But users, resorting to ingenious linguistic expedients, often find ways to push past those limits, forcing Big Tech to continually update the blacklist of prompts, “prompt” being the term for the instructions a user gives an AI to obtain a desired result.
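A deliberately naive sketch of what blacklist-style filtering amounts to (the keyword list and function below are illustrative assumptions, not anything used by Anthropic or any real moderation pipeline):

    # Toy blacklist filter: block prompts containing forbidden phrases.
    BLACKLIST = {"write propaganda", "campaign ad", "voter targeting"}

    def is_blocked(prompt: str) -> bool:
        text = prompt.lower()
        return any(term in text for term in BLACKLIST)

    print(is_blocked("Write propaganda for my candidate"))  # True
    print(is_blocked("Compose a stirring civic message"))   # False

The second prompt slips through despite asking for much the same thing, which is why such lists are a moving target and must be updated continually.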

“When we think about political, religious or ideological persuasion, it is difficult not to think about possible abuses,” Amodei explained.

“In defense of our experiment, I can say that we will be able to use these same AI systems to help people orient themselves better in a world where persuasion derived from AI-generated content is widespread,” the CEO continued.

“And so,” adds the Anthropic leader, “while AI becomes in some respects more dangerous, can we somehow use that same AI to strengthen people’s defenses? I feel like I don’t have a very clear idea of how to do it yet, but it’s something I’m thinking about.”

The problem, according to Amodei, is that in the meantime others could develop artificial intelligence models that are less “responsible”.

“We, as a company, can prohibit these particular use cases, but we cannot prevent every other company from allowing them,” the executive explains. “Even if a law were passed in the US, foreign companies could develop their own persuasive AI, right? When I think about what language models will be able to do in the future, it can be quite scary from the perspective of espionage and disinformation campaigns.”
