The increasing threat of deepfakes has prompted the United States Federal Trade Commission (FTC) to update regulations aimed at preventing the impersonation of businesses or government agencies through artificial intelligence (AI). The move's primary focus is protecting consumers.
FTC takes proactive steps against deepfake threats
Under the proposed regulation, generative artificial intelligence (GenAI) platforms would be prohibited from offering products or services that could harm consumers through impersonation. This move reflects the FTC's acknowledgment of the growing sophistication of AI-driven scams, including voice cloning, which can deceive individuals and manipulate them into fraudulent activities.
FTC Chair Lina Khan has underscored the urgency of addressing impersonator fraud, especially in light of the increasing prevalence of AI technologies. By expanding the scope of the impersonation rule, the FTC aims to equip itself with stronger measures to combat scams perpetrated through AI-enabled impersonation of individuals or entities.
A notable aspect of the updated regulation is its empowerment of the FTC to initiate federal court cases directly against scammers who exploit AI to impersonate government or business entities. This provision emphasizes the FTC’s commitment to swift intervention and the recovery of ill-gotten gains obtained through fraudulent impersonation.
The final rule on government and business impersonation will take effect 30 days after its publication in the Federal Register. The accompanying proposed rule, which would extend those protections to the impersonation of individuals, is open to a 60-day public comment period, giving stakeholders the opportunity to provide feedback and ensuring that diverse perspectives are considered before implementation.
Regulatory measures and legal landscape
While federal law does not currently address the creation or sharing of deepfake images, some lawmakers are beginning to take steps against this emerging threat. Victims of deepfake manipulation, whether celebrities or private individuals, may explore legal avenues such as copyright law, likeness rights, and torts such as invasion of privacy or intentional infliction of emotional distress. Navigating these legal frameworks, however, can be arduous and time-consuming.
Responding to the growing concern over deepfake technology, the Federal Communications Commission (FCC) recently prohibited AI-generated robocalls by reinterpreting existing rules that forbid spam messages from artificial or pre-recorded voices.
This regulatory action follows a notable incident in New Hampshire where a deepfake of President Joe Biden was used in a phone campaign to dissuade people from voting. In the absence of federal legislation, several states have taken independent action to outlaw deepfakes within their jurisdictions.
The evolving landscape of AI-driven deception underscores the need for comprehensive measures to address emerging threats to consumer safety and security. By updating regulations and enforcement mechanisms, government agencies aim to stay ahead of malicious actors who seek to exploit technological advancements for fraudulent purposes.
As the FTC’s proposed regulation undergoes public scrutiny and refinement, it represents a pivotal step in safeguarding against AI-enabled scams and protecting the integrity of businesses and government institutions. Through collaborative efforts between regulators, lawmakers, and stakeholders, society can better navigate the complex challenges posed by deepfake technology and uphold trust and accountability in an increasingly digitized world.