The Facebook and Instagram parent is also building tools to label AI-generated images from Google, OpenAI and Microsoft that users post on social media.

Meta has revealed plans to set up a dedicated team that will combat disinformation and AI misuse ahead of the upcoming EU elections in June.

In an update detailing plans published yesterday (25 February), Meta said it removes the most serious kinds of misinformation from Facebook and Instagram – both platforms it owns – that could contribute to imminent violence, physical harm or voter suppression.

For content that does not violate these policies, the social media giant said it works with 22 independent fact-checking partners across the EU to review and rate the content. Meta is now expanding this programme with three new partners across Bulgaria, France and Slovakia.

“When content is debunked by these fact checkers, we attach warning labels to the content and reduce its distribution in Feed, so people are less likely to see it,” the company wrote.

Meta said that between July and December last year, more than 68m pieces of content viewed in the EU on Facebook and Instagram carried fact-checking labels. According to the company, once a label is placed on a piece of content, 95pc of people do not click through to view it.

“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognise that speed is especially important during breaking news events.”

Meta is also activating an elections operations centre dedicated to the EU, bringing together experts from across the company, including its intelligence, data science, engineering, research, operations, content policy and legal teams.

The objective is to identify potential threats and put “specific mitigations” in place across the family of apps and technologies in real time.

For content generated by AI, Meta has a rating option called “Altered”, which covers audio, video or photos that have been faked, manipulated or transformed in some way.

“When it is rated as such, we label it and down-rank it in feed, so fewer people see it. We also don’t allow an ad to run if it’s been debunked,” the company wrote.

“For content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI.”

Meta is now also building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock that users post to Facebook, Instagram and Threads.

“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so people have more information and context.”

The latest move makes Meta the second major social media platform to set out dedicated election plans for the EU.

Earlier this month, TikTok shared details of a new Election Centre that will launch in its app next month for each of the 27 EU member states, providing “trusted and authoritative information” for users. The ByteDance-owned company will also establish a dedicated “mission control” space in its Dublin office ahead of the EU elections.

