Spurred by the growing threat of deepfakes, the FTC is seeking to modify an existing rule that bans the impersonation of businesses or government agencies so that it also covers all consumers.

The revised rule, depending on the final language and the public comments the FTC receives, might also make it unlawful for a GenAI platform to provide goods or services that it knows, or has reason to know, are being used to harm consumers through impersonation.

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” FTC chair Lina Khan said in a press release. “With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever. Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”

It’s not just folks like Taylor Swift who have to worry about deepfakes. Online romance scams involving deepfakes are on the rise. And scammers are impersonating employees to extract cash from corporations.

In a recent poll from YouGov, 85% of Americans said that they were very concerned or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.

Last week, my colleague Devin Coldewey covered the FCC’s move to make AI-voiced robocalls illegal by reinterpreting an existing rule that prohibits artificial and pre-recorded message spam. The move was timely in light of a phone campaign that employed a deepfaked President Biden to deter New Hampshire citizens from voting, and that rule change, together with the FTC’s step today, represents the current extent of the federal government’s fight against deepfakes and deepfaking technology.

No federal law squarely bans deepfakes. High-profile victims like celebrities can in theory turn to more traditional legal remedies to fight back, including copyright law, likeness rights and torts (e.g. invasion of privacy, intentional infliction of emotional distress). But litigating under these patchwork laws can be time-consuming and laborious.

In the absence of Congressional action, ten states have enacted statutes criminalizing deepfakes, albeit mostly nonconsensual pornographic deepfakes. No doubt, we’ll see those laws amended to encompass a wider array of deepfakes, and more state-level laws passed, as deepfake-generating tools grow increasingly sophisticated. (Case in point: Minnesota’s law already targets deepfakes used in political campaigning.)
