AI Chatbots Surprisingly Sway Voters, Even with False Information
Recent research reveals that conversations with AI chatbots can significantly influence political opinions, even surpassing the impact of traditional political advertising. A study published in Nature found that chatbots advocating for presidential candidates in the lead-up to the 2024 US election demonstrably shifted voters’ attitudes. For example, supporters of Donald Trump became slightly more inclined to support Kamala Harris after chatting with an AI model favoring her, moving 3.9 points on a 100-point scale – a result four times greater than the effect observed from political ads in previous elections. Similar, even larger effects (around 10 points) were seen in experiments simulating the Canadian and Polish elections.
“One conversation with an LLM has a pretty meaningful effect on salient election choices,” explains Gordon Pennycook, a psychologist at Cornell University involved in the Nature study. He notes that LLMs are more persuasive than ads because they generate tailored information in real time and strategically deploy it during conversations.
Surprisingly, the chatbots were more effective when instructed to use facts and evidence, challenging the notion that partisan voters are immune to contradictory information. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University.
A concerning caveat emerged, though: chatbots advocating for right-leaning candidates were more likely to present inaccurate claims. This reflects biases in the vast datasets used to train these models, which often reproduce the less accurate political communication common on the right.
A complementary study published in Science investigated how to make chatbots persuasive. Researchers found that instructing the models to use facts and evidence, coupled with training on examples of persuasive conversations, was the most effective strategy. One model even shifted participants who initially disagreed with a political statement by a considerable 26.1 points toward agreement. This research involved deploying 19 LLMs to interact with nearly 77,000 participants in the UK across over 700 political issues.