Following the viral spread of pornographic AI images of singer Taylor Swift, government leaders are addressing the creation and sharing of sexualized AI-generated deepfakes.

On Jan. 30, a bipartisan group of Senators introduced a new bill that would criminalize the act of spreading nonconsensual and sexualized “digital forgeries” created using artificial intelligence. Digital forgeries are defined as “a visual depiction created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means to falsely appear to be authentic.”

Currently known as the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (or the "DEFIANCE Act"), the legislation would also provide a path to civil recourse for victims whose likenesses were depicted in nude or sexually explicit images. Through the bill, victims could sue "individuals who produced or possessed the forgery with intent to distribute it" or anyone who received the material knowing it was not made with consent, the Guardian reported.

“The volume of deepfake content available online is increasing exponentially as the technology used to create it has become more accessible to the public,” wrote Senate Judiciary Committee Chair Dick Durbin and Sen. Lindsey Graham. “The overwhelming majority of this material is sexually explicit and is produced without the consent of the person depicted. A 2019 study found that 96 percent of deepfake videos were nonconsensual pornography.”

Senate majority whip Dick Durbin explained in a press release that the bill’s swift introduction was spurred directly by the viral images of Swift and the White House’s demand for accountability. “This month, fake, sexually explicit images of Taylor Swift that were generated by artificial intelligence swept across social media platforms. Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit ‘deepfakes’ is very real.”

In a statement from White House press secretary Karine Jean-Pierre on Jan. 26, the Biden administration expressed its desire for Congress to address deepfake proliferation amid weak enforcement from social media platforms. “We know that lax enforcement disproportionately impacts women and also girls, sadly, who are the overwhelming targets of online harassment and also abuse,” Jean-Pierre told reporters. “Congress should take legislative action.”

The creation of deepfake “porn” has been criminalized in other countries and in some U.S. states, although such laws remain far from widespread. Mashable’s Meera Navlakha has reported on a worsening social media landscape that’s disregarded advocates’ ongoing demands for protection and accountability, writing, “The alarming reality is that AI-generated images are becoming more pervasive, and presenting new dangers to those they depict. Exacerbating this issue is murky legal ground, social media platforms that have failed to foster effective safeguards, and the ongoing rise of artificial intelligence.”

A climate of worsening media literacy — and the steep rise of digital misinformation and deepfake scams — prompts an even greater need for industry action.

If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.
