
How Generative AI Tools Could Disrupt the 2024 Elections

In recent years, computer engineers and technologically minded political scientists have warned that cheap, powerful artificial intelligence (AI) tools would soon allow anyone to create fake images, video and audio realistic enough to deceive voters and sway public opinion. Until recently, such synthetic media was crude, unconvincing and expensive to produce. AI technology has advanced rapidly, however, and highly sophisticated generative AI tools can now clone human voices and create hyper-realistic images, video and audio in seconds, output that can be targeted at specific audiences to spread misinformation quickly. The implications for the 2024 campaigns and elections are enormous, raising serious concerns that bad actors could mislead voters, impersonate candidates and undermine elections at an unprecedented scale.

The rapid development of generative AI tools has left cybersecurity firms struggling to keep up. A.J. Nash, Vice President of Intelligence at the cybersecurity firm ZeroFox, says the major leap forward is in the audio and video capabilities that have emerged. He fears that, once distributed on social platforms, such misinformation could have a severe impact.

AI experts have outlined several alarming scenarios in which generative AI could be used to create synthetic media that confuses voters, slanders candidates or incites violence: automated robocall messages, fabricated audio recordings in which a candidate appears to confess to a crime, fake images falsely claiming that a candidate has dropped out of the race, and manipulated videos produced to discredit individuals.

The manipulation of media is not a new phenomenon: video footage is routinely edited to present a particular view, and images and quotes are often taken out of context. That, however, is not the same as highly sophisticated generative AI that can create convincing content from scratch, and AI-generated political disinformation had already gone viral online before the 2024 election. AI images appearing to show Trump's mugshot fooled some social media users, even though no mugshot was taken when the former president appeared in a Manhattan criminal court on charges of falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

A trade association for political consultants in Washington recently condemned the use of deepfakes in political advertising, calling them a deception with no place in legitimate, ethical campaigns. Beyond deceptive advertising, AI-generated media can incite violence and turn Americans against one another, and there have been calls for guardrails to protect the public from misinformation spread by AI.

Legislation has been introduced at both the state and federal level to tackle the issues surrounding deepfakes. For example, Rep. Yvette Clarke, D-N.Y., has proposed a bill that would require campaign advertisements created with AI to be labeled and synthetic images to carry a watermark indicating that they are AI-generated. Many states have also put forward their own proposals to address concerns about deepfakes.
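
The watermark requirement is a policy mandate rather than a technical specification, but a rough sketch of what a visible "AI-generated" disclosure on an image could look like is shown below. It assumes the Pillow imaging library; the file names, label wording and placement are illustrative assumptions, not anything the proposed legislation specifies.

```python
# Minimal sketch: stamp a visible "AI-generated" disclosure on a synthetic image.
# Assumptions: Pillow is installed; the file names, label text and placement are
# illustrative, not the wording or format any bill actually mandates.
from PIL import Image, ImageDraw

def add_disclosure_label(in_path: str, out_path: str, label: str = "AI-generated") -> None:
    """Overlay a simple text label in the lower-left corner of the image."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30  # lower-left corner, 10 px margin
    # Draw a dark backdrop so the label stays readable on any background.
    draw.rectangle([x - 5, y - 5, x + 150, y + 20], fill=(0, 0, 0))
    draw.text((x, y), label, fill=(255, 255, 255))
    img.save(out_path)

if __name__ == "__main__":
    add_disclosure_label("synthetic.png", "synthetic_labeled.png")
```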

Despite the risks, many in the tech industry believe AI can also have positive applications in elections. Mike Nellis, CEO of the progressive digital agency Authentic, uses ChatGPT to produce content every day and encourages his staff to do the same, as long as anything drafted with the tool is reviewed by a human afterward. Nellis's newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller, which will write, send and evaluate the effectiveness of fundraising emails. The hope is that AI can serve as a copilot, taking over mundane tasks that would otherwise consume valuable human time.
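
As one illustration of that copilot workflow, here is a minimal sketch of drafting a fundraising email with a large language model and holding it for human review. It assumes the OpenAI Python client and an API key in the environment; the model name, prompt and helper function are hypothetical and do not reflect how Quiller is actually built.

```python
# Minimal sketch: draft a fundraising email with an LLM, then require human review.
# Assumptions: the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative, not Quiller's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_fundraising_email(candidate: str, issue: str) -> str:
    """Ask the model for a first draft; campaign staff edit and approve before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You draft short, factual fundraising emails for review by campaign staff."},
            {"role": "user",
             "content": f"Draft a 150-word fundraising email for {candidate} focused on {issue}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_fundraising_email("Jane Doe", "local infrastructure")
    print("--- DRAFT (requires human review before sending) ---")
    print(draft)
```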

The advances in generative AI suggest that synthetic images, video and audio could soon become powerful tools for creating disinformation on a vast scale. AI-generated media can incite violence and undermine the core principles of democratic elections. Legislative proposals are being made to counter this threat, but more work is needed to ensure the public is protected. At the same time, there are opportunities for AI to play a positive role in elections if it is used ethically.
