As the 2024 elections approach, a new threat to democracy has emerged: AI-generated disinformation. The explosive success of text generator ChatGPT spurred a global artificial intelligence revolution, but it also triggered warnings that such tools could flood the internet with disinformation and sway voters. Now OpenAI, the company behind ChatGPT and image generator DALL-E 3, says it will introduce tools to combat disinformation ahead of the dozens of elections this year in countries that are home to half the world’s population.
The move comes amid a growing global consensus that fabricated content could sway election results, years after disputed claims of Russian election meddling roiled American politics during the Trump administration. Attempts to manipulate the public with AI-generated images, videos, and audio have become increasingly common. The FBI is reviewing whether anyone can be prosecuted for misusing the technology in this year’s elections, and Congress is considering legislation to establish guidelines.
The flurry of incidents has kept lawmakers, political analysts, and tech luminaries up at night. “It’s clear that the power of this kind of generative technology is being weaponized against people,” said former Google CEO Eric Schmidt in a statement released earlier this week. “It’s an unprecedented time for democracy and the health of our society.”
In the US, a group backing Ron DeSantis, a Republican candidate seeking his party’s nomination to challenge President Joe Biden, used AI to impersonate Mr DeSantis in a video depicting the future of America if the Democrat kept the White House. It included images of illegal immigrants swarming the US border with Mexico, empty office buildings on Wall Street, and riot police failing to maintain order in San Francisco. Politicians, tech companies, and civil rights groups criticized the clip.
OpenAI’s new policies and tools aim to counter this type of manipulation. The company won’t let users build applications for partisan political campaigns or lobbying, and it will block apps that pretend to be real people, including candidates. It will also add a mechanism to flag applications used to spread information designed to discourage citizens from voting or to misrepresent voting eligibility requirements.
The company will also integrate its platform with real-time news reporting globally, including attribution and links. In addition, it plans to adopt digital credentials developed by a third-party coalition of AI firms, the Coalition for Content Provenance and Authenticity (C2PA), that encode details about the origin of images generated with DALL-E 3. This will help verify their authenticity, it says.
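The idea behind such credentials is that an image ships with a signed manifest recording how it was made, so any later tampering can be detected. The sketch below is a deliberately simplified illustration of that principle only: real C2PA credentials use certificate-based signatures and a binary manifest format, not the hypothetical HMAC key and JSON used here.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this sketch; real provenance schemes
# use certificate-backed public-key signatures instead.
SECRET_KEY = b"demo-signing-key"

def attach_credential(image_bytes: bytes, generator: str) -> dict:
    """Bundle an image digest with origin details and sign the manifest."""
    manifest = {
        "generator": generator,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image matches the manifest and the signature is valid."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...stand-in image bytes"
cred = attach_credential(image, "DALL-E 3")
print(verify_credential(image, cred))         # True: untouched image verifies
print(verify_credential(image + b"x", cred))  # False: tampering is detected
```

The key design point is that the signature covers the image digest, so even a one-byte change to the image invalidates the credential without any access to the original file.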