Computer engineers and tech-inclined political scientists have long warned that cheap, powerful artificial intelligence tools would soon make it possible to create fake images or videos believable enough to sway an election. But the technology, which produces media dubbed “deepfakes,” still seemed a year or two away. Now the 2024 White House race faces a firehose of AI-enabled disinformation that many tech experts say threatens voters and the democratic process.
The fabricated images of Donald Trump’s arrest are one example. Another is the GOP’s April ad depicting a dystopian future under Joe Biden, built from fake but realistic-looking photos of boarded-up storefronts and armored military patrols. It marked the first time a major party used a deepfake image in a presidential ad, a fact disclosed only in small print noting the image was AI-generated.
AI-generated images are already disrupting art and journalism, and now politics. According to some tech experts, they may also help skew the election by concentrating attention and money on swing voters who are easily misled.
Campaigns on both sides of the US political aisle are harnessing AI to target voters, often with uncanny precision. Given America’s extreme political polarization, only a tiny share of voters in a handful of critical states will likely decide the outcome in November, and campaigns are racing to reach them. AI makes it easier to single out those undecided voters and target them with hyper-realistic but entirely fake content.
The technology also enables new forms of deception, from social media bots that pose as real voters to robocalls that mimic actual voices and obscure a message’s origin. The language barriers that once helped distinguish fake from genuine content are fading, too, and telltale signs such as repeated words or odd word choices are harder to spot. And with the threat of foreign interference in US elections on the rise, groups intent on eroding trust in democracy could deploy these techniques far more widely than in past decades.
As the volume of AI-generated material grows, some are urging Congress to act quickly to preserve democracy in an era when it is easy for foreign adversaries to manipulate the US electoral process and undermine its integrity. But others warn that enforcing any rules or disclosure requirements would be nearly impossible given how rapidly the technology is evolving.
Some Democratic senators, including Amy Klobuchar, Michael Bennet, and Cory Booker, have introduced legislation that would require labels on AI-generated content in political ads, but it has yet to gain traction in Congress. For now, the FEC is limiting its rules to requiring that any advertisement using AI disclose that its content is AI-generated. But generative AI is evolving so rapidly that even such a rule might come too late to avert a flood of deception in November.