Numerous figures in artificial intelligence, including the pioneering researcher Yoshua Bengio, have signed an open letter calling for tighter regulation of deepfakes, fabricated images or videos of real people. These experts argue that the technology's rapid advance poses substantial societal risks, including nonconsensual sexual imagery, fraud, and the spread of political disinformation.
The signatories, who include senior researchers at major AI labs as well as Bengio, the scientific director of Mila, the Quebec AI institute, are among the latest to weigh in on an issue that has become a flashpoint in the tech community. Many scientists and business leaders have warned that advanced AI could produce machines that outperform humans and cause serious harm; some have called for a six-month pause on training the most powerful systems, a letter Elon Musk signed, while others, such as Eliezer Yudkowsky, have argued for halting development altogether.
Even as the technology races ahead, regulators are beginning to respond to the growing threat of deepfakes. Some states have introduced legislation requiring political campaigns and candidates to disclose when they use AI-generated content, while others seek to limit the dissemination of such material in the run-up to elections, fearing that it could mislead voters or spread false information.
But even if these efforts succeed, they will not be enough to stem disinformation at scale. Many of the most dangerous synthetic media scenarios involve manipulating individuals through direct, point-to-point communications. This kind of high-tech deception is often difficult for victims to spot, and it poses a particular challenge for governments and companies trying to manage the resulting public relations fallout.
For example, in May, a Chinese man named Guo Wenping fell victim to a deepfake scam in which a fraudster used AI face-swapping to pose as a well-known news anchor. The resulting video was posted to Weibo and viewed nearly 3.5 million times before being removed. Guo realized the content was a deepfake and called the police, who were able to block the transfer of funds from his bank account and arrest the perpetrator.
Critics of the current state of AI also worry that it could exacerbate inequality, with the technology's massive productivity gains flowing to the wealthy rather than to workers. But Geoffrey Hinton, another pioneer of the field, says it is unrealistic to expect AI regulation to prevent such outcomes because too many factors are at play. A more effective strategy, he suggests, might be to watermark AI-generated content, much as central banks watermark cash, so it is clear when a video or image was created by an automated process.
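To make the watermarking idea concrete, here is a minimal sketch of how a provenance tag might be invisibly embedded in an image, using a toy least-significant-bit scheme. This is purely illustrative: the `TAG` label and function names are assumptions, not part of any real standard, and production systems rely on far more robust statistical or cryptographic watermarks that survive compression, cropping, and re-encoding.

```python
# Toy illustration of invisible image watermarking via least-significant-bit
# (LSB) embedding. Hypothetical sketch only; real provenance schemes are
# designed to survive compression and editing, which this one does not.
import numpy as np

TAG = "AI-GENERATED"  # hypothetical provenance label


def embed_watermark(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the watermark")
    # Clear each target pixel's lowest bit, then set it to the tag bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)


def read_watermark(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Recover `length` bytes of the tag from the pixel LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(image)
    print(read_watermark(marked))  # -> AI-GENERATED
```

The design echoes Hinton's cash analogy: the mark is imperceptible to a viewer but mechanically checkable by anyone who knows where to look, though unlike a banknote watermark this naive version is trivially stripped by re-saving the image.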
Ultimately, the letter's authors believe that a comprehensive plan must combine legislative and voluntary measures. They doubt that legislation punishing the producers of malicious or harmful deepfakes will succeed on its own. Instead, they suggest that corporate policies and voluntary action could prove more effective: content moderation guidelines, the rapid removal of user-flagged content on social media platforms, and education and training that promote digital media literacy, better online behavior, and critical thinking, all of which build cognitive and practical safeguards against the misuse of synthetic media.