WASHINGTON (AP) — Computer engineers and tech-savvy political scientists have been warning for years that anyone with cheap, powerful artificial intelligence tools will soon be able to create fake images, video and audio that are realistic enough to fool voters and perhaps influence an election.
The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of disinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.
Advanced generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When tied to powerful social media algorithms, this fake and digitally created content can spread quickly and far and target very specific audiences, potentially taking dirty campaign tricks to a new low.
The implications for the 2024 campaigns and elections are as great as they are troubling: generative AI can not only rapidly produce targeted campaign emails, texts or videos, but can also be used to mislead voters, masquerade as candidates and undermine elections on a scale and at a speed never seen before.
“We are not prepared for this,” warned AJ Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “For me, the big leap forward is the audio and video capabilities that have emerged. If you can do that on a large scale and spread it on social platforms, it will have a big impact.”
AI experts can quickly cite a number of alarming scenarios in which generative AI is used to create synthetic media for the purpose of confusing voters, defaming a candidate or even inciting violence.
Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage of someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming that a candidate has dropped out of the race.
“What if Elon Musk calls you personally and tells you to vote for a certain candidate?” said Oren Etzioni, the founder and CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it isn’t him.”
Former President Donald Trump, who is running in 2024, has been sharing AI-generated content with his social media followers. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool.
A dystopian campaign ad released last month by the Republican National Committee offers another glimpse into this digitally manipulated future. The online ad, which came after President Joe Biden announced his re-election campaign, opens with a strange, slightly distorted image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”
A series of AI-generated images follows: Taiwan under attack; storefronts boarded up in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants spread panic.
“An AI-generated look at the country’s possible future if Joe Biden is re-elected in 2024,” reads the RNC’s ad description.
The RNC acknowledged the use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity firm based in Austin, Texas. Stoyanov predicted that groups seeking to interfere in American democracy will use AI and synthetic media as a way to erode trust.
“What happens when an international entity – a cybercriminal or a nation state – impersonates someone? What is the impact? Do we have any recourse?” said Stoyanov. “We’re going to see a lot more disinformation from international sources.”
AI-generated political disinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden that appears to show him giving a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.
AI-generated images appearing to show Trump’s mug shot also fooled some social media users, even though the former president did not take one when he was booked and arraigned in a Manhattan criminal court on charges of falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.
Legislation requiring candidates to label campaign ads created with AI was introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact.
Some states have offered their own proposals for addressing deepfake concerns.
Clarke said her biggest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and pits Americans against each other.
“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We need to put up some guardrails. People can be tricked, and it only takes a fraction of a second. People are busy with their lives and don’t have the time to check all the information. When AI is weaponized in a political season, it can be extremely disruptive.”
Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” that “has no place in legitimate, ethical campaigns.”
Other forms of artificial intelligence have been a feature of political campaigns for years, with data and algorithms used to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the latest innovations will continue to have positive effects in 2024.
Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every day” and encourages his staff to use it as well, as long as any content created with the tool is subject to human review afterward.
Nellis’ latest project, in collaboration with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails – all typically tedious tasks on campaigns.
“The idea is that every Democratic strategist, every Democratic candidate is going to have a copilot in their pocket,” he said.
Swenson reported from New York.
The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy. Read more about AP’s democracy initiative here. The AP is solely responsible for all content.
Follow the AP’s coverage of misinformation at https://apnews.com/hub/misinformation and of artificial intelligence at https://apnews.com/hub/artificial-intelligence