This week, a surge of nonconsensual pornographic images featuring American megastar Taylor Swift, presumably created by artificial intelligence, quickly circulated on various social media platforms, causing distress among her fans. The AI-generated images, or deepfakes, mainly proliferated on X, formerly known as Twitter, but also made their way to Meta-owned Facebook and Instagram. “They reached millions of people before they were taken down,” said Mason Allen, head of growth at deepfake-detecting group Reality Defender. Most of the deepfakes were football-themed and depicted a body-painted or bloodied Swift in objectifying scenarios that, in some cases, implied the infliction of violent harm, he added.
The explicit deepfakes have reignited calls from lawmakers to protect women and to crack down on the platforms and technology that spread such images. One image shared by an X user, for example, was viewed 47 million times before the account was suspended Thursday. X, Facebook, and Reddit have since removed the images. But lawmakers from both parties say the problem has only gotten worse as advances in artificial intelligence make it easier for bad actors to create and share fake, sexually explicit images.
On Thursday, Congresswoman Yvette Clarke, D-N.Y., slammed the online dissemination of the deepfakes and urged lawmakers from both parties to find a solution. “For years, women have been victims of deepfakes without their consent,” she wrote on her official X account. “With technological advances, protecting our citizens from this abuse is even more important.”
According to experts, lawmakers haven’t yet devised a comprehensive way to tackle the problem of deepfakes. Some states have restricted pornographic and political deepfakes, but those laws are not being effectively enforced, and the tools used to create these fakes continue to evolve. Platforms can try to limit the proliferation of such images by relying on users to report them, but that approach is largely ineffective: by the time the fakes are reported and removed, millions of people have already seen them.
One of the alleged creators of the Taylor Swift deepfakes, Toronto man Zubear Abdi, made his X account private after his name was leaked on social media. He has been accused of leaking other women’s addresses online and had previously been banned from the platform for posting “derogatory, racist, homophobic, or otherwise discriminatory comments.”
Sources close to the star say she is not only “furious” about the images but could also sue the alleged perpetrators and the notorious Celeb Jihad website that posted them. A potential lawsuit could allege libel, defamation, cyber harassment, identity theft, or revenge porn.