A consortium comprising Microsoft, Google, and 18 other prominent companies in artificial intelligence has been formed with the primary objective of preventing the misuse of their software to deceive voters. The companies are collectively pledging to implement measures to identify and address AI-driven election misinformation, and to raise public awareness of the deception risks their technology poses. The 20-company “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” was announced at the Munich Security Conference on Friday. It aims to develop tools for watermarking, detecting, and labeling realistic AI-generated content; to assess the models underlying the software for abuse risks; and to support public-education efforts.
The tech industry has come under intense pressure to do more to counteract AI’s potential for political manipulation. In particular, generative artificial intelligence programs that can create text, images, or video to mimic key stakeholders in democratic elections are rapidly becoming more accessible. The resulting ‘deepfakes’ have been able to convince some voters that they are hearing or seeing the words or actions of a political figure they know.
Some companies involved in the new pact already have policies against this kind of abuse. For example, Meta, the company that owns Facebook, Instagram, and WhatsApp, removes posts about electoral processes that contain false information and prohibits the use of its generative AI to make political ads. But others, such as Anthropic, the maker of the chatbot Claude, and the voice-cloning startup ElevenLabs, do not yet have comparable policies. The pact’s emphasis on transparency and education rather than on removing specific content also suggests the companies are reluctant to police political speech more aggressively, a move that would sit uneasily with their broader efforts to promote the power of AI.
The companies will share their research results and collaborate with other groups working to protect democracy from misinformation, including fact-checking organizations and researchers who study AI, and they will coordinate their efforts with governments and other firms. The pact is a welcome development, but it is only one step in what could be a long struggle to keep AI from being weaponized against democracy. As more people gain access to the technology, better ways of educating them about its risks will be needed. This is not the first attempt to use AI to influence elections, and it will likely not be the last; as the technology becomes more powerful and accessible, the stakes are higher than ever before. The world must be prepared to deal with this new threat.