It declared the use of such technology in unsolicited robocalls would be outlawed, effective immediately – a move that grants state authorities the power to prosecute individuals or entities responsible for these deceptive calls.
FCC Chairwoman Jessica Rosenworcel said in a statement on Thursday (08.02.24): "Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters.
"We're putting the fraudsters behind these robocalls on notice."
The decision follows a concerning rise in robocalls mimicking the voices of celebrities and political figures.
It comes after voters in New Hampshire received robocalls impersonating US President Joe Biden before the state’s presidential primary.
The calls, discouraging voters from participating in the primary, were linked to two Texas-based companies, triggering a criminal investigation.
FCC authorities have stressed that these calls can confuse consumers and voters by spreading misinformation while impersonating public figures or even family members.
While state attorneys general already had the authority to prosecute for scams or fraud related to robocalls, the new regulation explicitly makes the use of AI-generated voices in such calls illegal.
The proactive step by the FCC was prompted by a letter from attorneys general representing 26 states, urging the agency to restrict the use of AI in marketing phone calls.
It aligns with the FCC's November 2023 Notice of Inquiry seeking nationwide input on the use of AI technology in consumer communications.
Globally, concerns about AI-based deepfakes that manipulate video or audio content have escalated.
Instances of deepfake audio targeting senior politicians in the UK, Slovakia, and Argentina have raised alarms.
The National Cyber Security Centre in the UK has warned of the threats posed by AI fakes as countries, including the US, UK, and India, navigate major elections.