As the US gears up for the 2024 presidential election, OpenAI has shared its plans for suppressing election-related misinformation worldwide, with a focus on improving transparency around the origin of information. One highlight is the use of cryptography, as standardized by the Coalition for Content Provenance and Authenticity (C2PA), to encode the provenance of images generated by DALL-E 3. This will allow the platform to better detect AI-generated images using a provenance classifier, helping voters assess the reliability of certain content.
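To illustrate the idea, here is a minimal Python sketch of cryptographic provenance under simplified assumptions: real C2PA manifests are embedded in the media file and signed with X.509 certificates via COSE, whereas this toy version binds an origin claim to the image bytes with an HMAC over a JSON payload. The names `make_manifest`, `verify_manifest` and `SIGNING_KEY` are hypothetical, not part of any OpenAI or C2PA API.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. Real C2PA manifests are signed with X.509
# certificate chains and COSE signatures, not a shared-secret HMAC.
SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Build a toy provenance manifest binding an origin claim to the image content."""
    payload = {
        "claim_generator": generator,  # e.g. "DALL-E 3"
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches the image bytes."""
    claimed_sig = manifest.get("signature", "")
    body = json.dumps(
        {k: v for k, v in manifest.items() if k != "signature"},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(claimed_sig, expected):
        return False  # manifest was forged or tampered with
    return manifest["asset_hash"] == hashlib.sha256(image_bytes).hexdigest()

image = b"\x89PNG...fake image bytes..."
manifest = make_manifest(image, "DALL-E 3")
print(verify_manifest(image, manifest))             # True
print(verify_manifest(image + b"edit", manifest))   # False: content changed
```

The design point is that the claim ("this image came from DALL-E 3") is tied to a hash of the exact image data, so a tampered image or forged manifest fails verification; a provenance classifier can then treat a valid manifest as a strong signal of AI origin.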

This approach is similar to, if not better than, DeepMind's SynthID, which digitally watermarks AI-generated images and audio as part of Google's own election content strategy published last month. Meta's AI image generator also adds an invisible watermark to its content, though the company has yet to share its plans for tackling election-related misinformation.
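Invisible watermarking works differently from attached provenance metadata: the signal lives in the pixels themselves. SynthID and Meta's watermark are learned, robustness-hardened schemes whose details are not public; the sketch below uses the classic least-significant-bit technique purely to show what "invisible" means, and does not reflect either company's actual method.

```python
# A toy least-significant-bit (LSB) watermark. Unlike SynthID, this naive
# version does not survive compression, resizing or other edits.

def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide watermark bits in the lowest bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_watermark(pixels: list[int], n_bits: int) -> list[int]:
    """Read the hidden bits back out of the pixel values."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 201, 64, 65, 130, 131, 90, 91]  # grayscale values, 0-255
mark = [1, 0, 1, 1]                            # 4-bit watermark
stamped = embed_watermark(pixels, mark)
assert extract_watermark(stamped, 4) == mark   # watermark recovered
# Each pixel value changes by at most 1, imperceptible to the eye.
```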

OpenAI says it will soon work with journalists, researchers and platforms for feedback on its provenance classifier. In the same vein, ChatGPT users will start to see real-time news from around the world, complete with attribution and links. They'll also be directed to CanIVote.org, the official online source on US voting, when they ask procedural questions such as where or how to vote.

Additionally, OpenAI reiterates its current policies on shutting down impersonation attempts in the form of deepfakes and chatbots, as well as content designed to distort the voting process or to discourage people from voting. The company also forbids applications built for political campaigning, and its new GPTs let users report potential violations.

OpenAI says the lessons from these early measures, if successful at all (and that's a very big "if"), will help it roll out similar strategies across the globe. The firm promises more related announcements in the coming months.
