AI Chatbots Surprisingly Sway Voters, Even with False Information
Recent research reveals that conversations with AI chatbots can significantly influence political opinions, even surpassing the impact of traditional political advertising. A study published in Nature found that chatbots advocating for presidential candidates in the lead-up to the 2024 US election demonstrably shifted voters’ attitudes. For example, supporters of Donald Trump became slightly more inclined to support Kamala Harris after chatting with an AI model favoring her, moving 3.9 points on a 100-point scale – a result four times greater than the effect observed from political ads in previous elections. Similar, even larger effects (around 10 points) were seen in experiments simulating the Canadian and Polish elections.
“One conversation with an LLM has a pretty meaningful effect on salient election choices,” explains Gordon Pennycook, a psychologist at Cornell University involved in the Nature study. He notes that LLMs are more persuasive than ads because they generate tailored information in real-time and strategically deploy it during conversations.
Surprisingly, the chatbots were more effective when instructed to use facts and evidence, challenging the notion that partisan voters are immune to contradictory information. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University.
However, a concerning caveat emerged: chatbots advocating for right-leaning candidates were more likely to present inaccurate claims. This reflects the biases present in the vast datasets used to train these models, which often reproduce the less accurate political communication common on the right.
A complementary study published in Science investigated how to make chatbots more persuasive. The researchers found that instructing the models to use facts and evidence, coupled with training on examples of persuasive conversations, was the most effective strategy. One model even shifted participants who initially disagreed with a political statement by a considerable 26.1 points toward agreement. The research involved deploying 19 LLMs to interact with nearly 77,000 participants in the UK across more than 700 political issues.