Hundreds of academics, politicians and AI leaders have signed an open letter calling for anti-deepfake legislation. The signatories declare deepfakes to be “a growing threat to society”, and are asking governments to “impose obligations” to prevent the proliferation of harmful AI-generated media.
The letter calls for three main laws to be implemented:
- Fully criminalize deepfake child pornography
- Establish criminal penalties for anyone knowingly involved in creating or spreading harmful deepfakes
- Require software developers and distributors to a) prevent their products from creating harmful deepfakes and b) be held liable if their measures are too easily sidestepped
Among the prominent voices to sign the letter – which stands at 778 signatures at the time of writing – are businessman and politician Andrew Yang, filmmaker Chris Weitz, researcher Nina Jankowicz, academic and psychologist Steven Pinker, actor Kristi Murdock, physicist Max Tegmark, and neuroscientist Ryota Kanai.
The signatories belong to several industries, mainly pertaining to artificial intelligence, academia, entertainment, and politics. Deepfakes pose a threat to each, and as the letter points out, “not all signers will have the same reasons for supporting the statement.”
The dangers of AI-generated media have become more pronounced across sectors over the past year. Several members of SAG-AFTRA have signed, months after a lengthy strike punctuated by conversations about the use of AI in the entertainment industry. Politicians are scrambling to address the proliferation of deepfakes, with Homeland Security recently beginning to recruit AI experts and countries like India grappling with AI-generated misinformation ahead of election season.
Deepfakes have also been a point of discussion when it comes to nonconsensual pornography. Earlier this year, fake explicit images of Taylor Swift spread on X, highlighting the need for urgent legal and societal change.
The letter is not the first call to action on regulating and preventing the spread of deepfakes. As TechCrunch points out, the European Union has been working on plans to criminalize AI-generated images and deepfakes depicting child abuse and pornography. In England, the government cracked down on the issue last year, making it easier to prosecute those sharing deepfake porn.
But there is still work to be done globally, legally, and, of course, by tech companies themselves.