The Federal Communications Commission has stepped up efforts to protect American voters from scams and voter-suppression calls ahead of the November elections by banning robocalls that feature artificial intelligence-generated voices. Under the new rules, anyone making non-emergency calls with AI-generated voices must have express written consent from the person being called. According to an FCC statement, callers who violate the rule face steep fines on a per-call basis and can be blocked from making future calls. The rules also give state attorneys general new tools to prosecute bad actors who misuse voice-cloning technology to spread misinformation, commit fraud, and interfere with elections.
It’s a crucial development in the fight against unsolicited robocalls, which have plagued Americans for years. Complaints filed with the National Do Not Call Registry fell by more than 2.6 million last year, yet imposter scams and robocalls that mimic the voices of politicians, celebrities, and others remain top consumer concerns. The bogus calls often target vulnerable family members, impersonate well-known personalities, or promise a prize in exchange for a donation to a fake charity.
To combat these calls, the commission’s new ruling interprets the Telephone Consumer Protection Act of 1991 more broadly, confirming that voices simulated or generated by AI count as the artificial or prerecorded voices the law already restricts. FCC Chairwoman Jessica Rosenworcel says callers must have express consent from the people they’re calling before using such voices. “Bad actors are using artificially generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters,” she said.
While misleading audio and visuals created with AI aren’t new, recent advancements have made them easier to create and harder to detect. For example, a viral video that appeared to capture an explosion at the Pentagon was generated by a computer program. In the same vein, a viral photo widely believed to show a child in a schoolhouse turned out to be a stock image altered with editing software.
These calls, including those that spoof famous personalities, threaten incarceration, or solicit political contributions, violate the Telephone Consumer Protection Act. In one prominent case, robocalls that impersonated President Joe Biden in an attempt to discourage people from voting in New Hampshire’s primary were traced to companies in Texas; investigators are pursuing the matter, and a civil lawsuit has already been filed against the company behind the calls.
The FCC’s new rules take effect immediately, and violators will be liable for fines on a per-call basis and can have their calls blocked by phone carriers. However, security experts say the rules may have limited impact. Jon Polly, PSP, chair of the ASIS International Emerging Technology Community Steering Committee, says that while creating safeguards for consumer protection is a positive step, the rules will be difficult to enforce against robocalls that originate outside the United States.