While the Microsoft-backed start-up hopes the latest move will establish AI ‘provenance’, the metadata associated with an image is incredibly easy to remove.
OpenAI has said that images generated by the company’s Dall-E 3 text-to-image tool will now include embedded metadata to help users identify whether an image was generated by the AI model.
The standard being used by the ChatGPT creator is C2PA, named after the Coalition for Content Provenance and Authenticity, an open technical standard that allows publishers, companies and others to embed metadata in media for verifying its origin and related information.
C2PA is not restricted to images generated by AI – the same standard is used by many camera manufacturers and news organisations to certify the source and history of media content.
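At a technical level, C2PA credentials in a JPEG are carried in APP11 marker segments as JUMBF boxes, with the manifest store labelled ‘c2pa’. The snippet below is a minimal, illustrative sketch in Python that scans a file for these telltale byte markers. It is a heuristic only, not part of OpenAI’s or the C2PA’s tooling, and it does not verify the signed manifest; the function name and file path are illustrative.

```python
# Heuristic check for C2PA metadata in a JPEG file.
# C2PA manifests are carried in JPEG APP11 (0xFFEB) segments as JUMBF
# boxes, and the manifest store is labelled "c2pa". This only looks for
# those telltale markers -- it does NOT verify the signed manifest.

def has_c2pa_markers(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    has_app11 = b"\xff\xeb" in data   # APP11 segment marker
    has_jumbf = b"jumb" in data       # JUMBF superbox type
    has_label = b"c2pa" in data       # C2PA manifest store label
    return has_app11 and has_jumbf and has_label

if __name__ == "__main__":
    import sys
    path = sys.argv[1]  # e.g. an image downloaded from ChatGPT
    print("C2PA markers found" if has_c2pa_markers(path) else "No C2PA markers")
```

Actually verifying that a manifest is intact and correctly signed requires dedicated tooling, such as the Content Credentials Verify site referenced in OpenAI’s update.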
“Images generated with ChatGPT on the web and our API serving the DALL·E 3 model, will now include C2PA metadata,” OpenAI wrote in an update on its website yesterday (6 February).
“People can use sites like Content Credentials Verify to check if an image was generated by the underlying DALL·E 3 model through OpenAI’s tools. This should indicate the image was generated through our API or ChatGPT unless the metadata has been removed.”
Currently applied to the web version of the tool, the change will also roll out to all mobile users of Dall-E 3 by 12 February.
“Images generated in ChatGPT and our API now include metadata using C2PA specifications. This allows anyone (including social platforms and content distributors) to see that an image was generated by our products,” the company said in a post on X (6 February 2024): https://t.co/kRv3mFnQFI
However, OpenAI cautioned that metadata such as C2PA is not a “silver bullet” for addressing issues of provenance, the information that describes the origin and history of a piece of digital content and establishes its authenticity.
Such metadata, OpenAI said, can easily be removed from an image, whether accidentally or intentionally.
“For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it. Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API,” the company wrote.
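To illustrate how fragile this metadata is, here is a minimal sketch using the Pillow imaging library: a plain open-and-save round trip re-encodes the JPEG without its APP11/JUMBF segments, which is where the C2PA manifest lives, so the copy loses its credentials. The filenames are illustrative.

```python
# Demonstration of how easily embedded metadata is lost on re-save.
# Pillow does not carry APP11/JUMBF segments (where C2PA credentials
# live) across a plain open-and-save round trip, so the re-encoded
# copy below ends up without the Content Credentials of the original.

from PIL import Image

with Image.open("dalle_image.jpg") as im:   # original with C2PA metadata
    im.save("stripped_copy.jpg")            # re-encoded copy, metadata gone
```

This mirrors what many social platforms do server-side when processing uploads, which is why a missing manifest says nothing about whether an image was AI-generated.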
“We believe that adopting these methods for establishing provenance and encouraging users to recognise these signals are key to increasing the trustworthiness of digital information.”
Last July, the White House secured “voluntary commitments” from seven major AI makers including Google, Meta and OpenAI to develop mechanisms such as watermarking to ensure users know when content is generated by AI.
“This action enables creativity with AI to flourish but reduces the dangers of fraud and deception,” the White House said at the time. “The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.”