The use of artificial intelligence to generate images, text and voices has the potential for “muddying the waters” in political campaigning and deepening mistrust among voters, according to communications experts.

Generative artificial intelligence, or AI, allows users to input prompts resulting in generated content that can depict just about anything the user desires. With the 2024 elections looming, experts in political communications are bracing for AI-generated images to start showing up much more frequently in campaign ads.

“You talk about opposition messaging, it can be created at the snap of a finger. The prompt returns information so fast that we’ll be inundated with it as the election cycle really starts to heat up,” Janet Coats, managing director of the University of Florida’s Consortium on Trust in Media and Technology, told The News Service of Florida in a recent interview.

The ability to manipulate images and voice audio is “moving to a whole different level of sophistication,” Coats said.

“We’ve been heading down this road for a long time,” Coats said. “One of the first visual ads that could now be labeled as deceptive is 1964, with the very famous [attack ad], the little girl with the daisy and then the mushroom cloud superimposed over her on the screen that the [Lyndon B.] Johnson campaign ran against Barry Goldwater.”

In the past, “when those manipulations were happening, you knew there was a human being who was manipulating the information,” she added.

For the two biggest political figures residing in Florida, who are on a collision course in the 2024 Republican presidential primary, the issue erupted in early June. A Twitter account affiliated with Gov. Ron DeSantis’ presidential campaign tweeted a video that included multiple AI-generated pictures of former President Donald Trump hugging Anthony Fauci, the infectious-disease expert who helped lead the federal pandemic response during Trump’s administration.

With Trump and DeSantis battling over their respective approaches to the COVID-19 pandemic, the DeSantis camp sought to depict a cozy relationship between Trump and Fauci.

The post containing the video was slapped with a Twitter “community note,” which the social-media platform says is aimed at letting users “collaboratively add context to potentially misleading Tweets.”

“The 3 still shots showing Trump embracing Fauci are AI generated images. The remainder of the ad’s recordings and images are authentic,” the notice said.

The use of AI-generated images in multiple political ads by New Zealand’s National Party made international headlines in May. Also that month, the Republican National Committee released an attack ad on President Joe Biden’s reelection campaign that used AI-created imagery to depict a grim vision of the country under a second Biden term.

The video was posted to the GOP’s official YouTube channel with a description that explicitly lets viewers know it incorporates AI-generated images.

“An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024,” the description said.

But not all AI imagery will be identified so readily.

Coats pointed to the ease with which AI-generated images can be created — by just about anyone with an internet connection — and the potential difficulty of pinpointing their source.

“It’s a low barrier to entry, to do it. You don’t have to go contract for some big expensive tool. The tools are readily available. You don’t have to have a particularly specialized knowledge to use them. The more sophisticated the prompt, the higher-quality the output. But it’s not rocket science,” Coats said.

Steve Vancore, a longtime political consultant and pollster, said generative AI could become commonplace as the volume of political ads and other communications being put in front of voters continues to rise.

“In the bigger picture, what should be worrisome — the public already has an inherent distrust of political communications. And as a result of that, we’ve seen an escalating arms race in the amount of communications in races,” Vancore told the News Service.

With the volume of political ads going up, the increased use of AI-generated imagery, voices and text is likely to follow.

“There’s so much at stake, the people running these campaigns will only use it to raise more money and to use more of this. And so it’s going to be an unfortunate arms race that’s going to create a higher degree of distrust by the public,” Vancore said.

Vancore, who has been involved in more than 250 campaigns over his decadeslong career, said his advice to candidates about the use of generative AI technology in ads depends on how it would be used.

“My standard for political attack ads, negative ads is: Is it truthful, is it verifiable, and is it relevant,” Vancore said.

Vancore used an example of a candidate using the AI tool ChatGPT, which generates text, to create emails for their constituents.

“To say, ‘Hey, I want a series of emails talking about my program to have after-school counseling for kids.’ … That’s a perfectly acceptable use of artificial intelligence,” Vancore said. “What’s not an acceptable use of artificial intelligence is, ‘Hey, I want you to generate some images of my opponent hanging out with underage girls.’”

Whether AI-generated ads could ding a candidate’s credibility also depends on how they’re used, Vancore said, adding that other uses of the technology might be more subtle.

“One of the raps on Joe Biden is that he’s old. That’s not an unfair rap, perhaps. It’s a legitimate concern that the most powerful person on earth, or one of, maybe is getting older right? What if the Joe Biden campaign subtly just de-aged him a little bit? Showed him walking a little bit more gingerly, responding a little more rapidly,” Vancore said.

Trump, Vancore said, “has the same problem.”

“You can see he [Trump] is aging, it probably has a lot to do with what’s going on in his life. But, if somebody were to de-age him a little bit … Would that even make the press? Would the press even pick up on it, and will that splash back?” he said.

Coats also pointed to the possibility of AI being used to “clean up” candidates’ images.

“There’s the potential for muddying the waters, not just to create attack ads or disinformation about your opponent, but to try to clean yourself up. It’s an octopus. There’s just so many ways that I don’t think we’ve even thought about how it could be deployed,” Coats said.

Candidates on the receiving end of ads that use AI-generated images don’t have to use new methods to combat the attacks, according to Jay Hmielowski, associate professor of public relations at UF.

For example, candidates can use programs designed to detect the use of AI-generated images, said Hmielowski, who specializes in political communications.

“You can use that and say, look, we ran it through this detector, and clearly this shows that this isn’t our candidate saying this. In addition, here’s the actual video of what happened at this event,” he said. “So, you’d do the same things that you’ve always done. Push back against it with, here’s what actually happened, here’s what the facts are relative to this. And then you hope that that gets through to the population of people who are willing to listen to stuff beyond their sort of political bubbles,” he said.

Ryan Dailey reports for the News Service of Florida.

Copyright 2023 News Service of Florida. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.