On February 6, Meta said it was going to label AI-generated images on Facebook, Instagram, and Threads. When someone uses Meta’s AI tools to create images, the company will add visible markers to the image, as well as invisible watermarks and metadata in the image file. The company says its standards are in line with best practices laid out by the Partnership on AI, an AI research nonprofit.
Big Tech is also throwing its weight behind a promising technical standard that could add a “nutrition label” to images, video, and audio. Called C2PA, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. Read more about it here.
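To get a feel for what that cryptography is doing, here is a minimal Python sketch (it assumes the third-party `cryptography` package) of the general idea behind a signed provenance manifest: hash the content, attach claims about who or what created it, and sign the bundle so that any later tampering is detectable. The manifest fields and helper functions are invented for illustration; the real C2PA specification defines its own manifest structure and uses certificate chains rather than a bare key pair.

```python
# Toy illustration of the idea behind C2PA-style provenance manifests.
# Not the real C2PA format: the actual standard uses JUMBF/CBOR structures
# and X.509 certificate chains. Field names here are made up for clarity.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(image_bytes: bytes, claims: dict, signing_key: Ed25519PrivateKey) -> dict:
    """Bundle a hash of the content with provenance claims and sign the bundle."""
    payload = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,  # e.g. which tool generated the image and when
    }
    payload_bytes = json.dumps(payload, sort_keys=True).encode()
    return {
        "payload": payload,
        "signature": signing_key.sign(payload_bytes).hex(),
    }


def verify_manifest(image_bytes: bytes, manifest: dict, public_key) -> bool:
    """Check that neither the image nor the claims were altered after signing."""
    payload = manifest["payload"]
    if payload["content_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # the pixels changed after the manifest was created
    payload_bytes = json.dumps(payload, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload_bytes)
        return True
    except InvalidSignature:
        return False  # the claims were tampered with or signed by someone else


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"fake image bytes for the demo"
    manifest = make_manifest(image, {"generator": "SomeImageModel", "created": "2024-02-08"}, key)
    print(verify_manifest(image, manifest, key.public_key()))            # True
    print(verify_manifest(image + b"edit", manifest, key.public_key()))  # False: content changed
```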
On February 8, Google announced it is joining other tech giants such as Microsoft and Adobe in the steering committee of C2PA and will include its watermark SynthID in all AI-generated images in its new Gemini tools. Meta says it is also participating in C2PA. Having an industry-wide standard makes it easier for companies to detect AI-generated content, no matter which system it was created with.
OpenAI, too, announced new content provenance measures last week. It says it will add watermarks to the metadata of images generated with ChatGPT and DALL-E 3, its image-making AI, and will now include a visible label in images to signal that they were created with AI.
These methods are a promising start, but they’re not foolproof. Watermarks in metadata are easy to circumvent: take a screenshot of the image and share that copy instead, and the metadata is gone. Visible labels, meanwhile, can be cropped or edited out. There is perhaps more hope for invisible watermarks like Google’s SynthID, which subtly changes the pixels of an image so that computer programs can detect the watermark but the human eye cannot; these are much harder to tamper with. What’s more, there still aren’t reliable ways to label and detect AI-generated video, audio, or even text.
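To make that distinction concrete, here is a deliberately crude least-significant-bit watermark in Python. It is not how SynthID actually works (Google hasn’t published the details, and its learned approach is far more robust), but it illustrates why a pixel-level signal can survive a screenshot while file metadata cannot: the mark lives in the pixel values themselves, not in the file container, and the changes are too small for the eye to notice.

```python
# Crude least-significant-bit (LSB) watermark, only to illustrate the idea of
# hiding a signal in pixel values. SynthID's real approach is a learned and
# far more robust technique; this toy version would not survive compression.
import numpy as np

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary tag


def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the least significant bits of the first pixels."""
    marked = pixels.copy()
    flat = marked.reshape(-1)
    n = WATERMARK_BITS.size
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS  # each value changes by at most 1
    return marked


def detect(pixels: np.ndarray) -> bool:
    """Check whether the first pixels carry the watermark bits."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[: WATERMARK_BITS.size] & 1, WATERMARK_BITS))


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed(image)
    print(detect(marked))   # True: the signal is invisible but machine-readable
    print(detect(image))    # almost certainly False: no watermark present
    print(np.max(np.abs(marked.astype(int) - image.astype(int))))  # at most 1: imperceptible
```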
But there is still value in creating these provenance tools. As Henry Ajder, a generative-AI expert, told me a couple of weeks ago when I interviewed him about how to prevent deepfake porn, the point is to create a “perverse customer journey.” In other words, add barriers and friction to the deepfake pipeline in order to slow down the creation and sharing of harmful content as much as possible. A determined person will likely still be able to circumvent these protections, but every little bit helps.
There are also many nontechnical fixes tech companies could introduce to prevent problems such as deepfake porn. Major cloud service providers and app stores, such as Google, Amazon, Microsoft, and Apple, could move to ban services that can be used to create nonconsensual deepfake nudes. And watermarks should be included in all AI-generated content across the board, even by smaller startups developing the technology.
What gives me hope is that alongside these voluntary measures we’re starting to see binding regulations, such as the EU’s AI Act and the Digital Services Act, which require tech companies to disclose AI-generated content and take down harmful content faster. There’s also renewed interest among US lawmakers in passing some binding rules on deepfakes. And following AI-generated robocalls imitating President Biden’s voice that told voters not to vote, the US Federal Communications Commission announced last week that it was banning the use of AI-generated voices in robocalls.