The first U.S. presidential election in the era of deepfakes is presenting officials with challenges never before seen at a time when tech giants are scaling back their cyberdefenses.

Fabricated images and audio clips designed to sway voters are raising alarms in the days leading up to the Republican primary in South Carolina on Feb. 25 and Super Tuesday on March 5, when both parties will hold primaries in a number of states.

“Are protections in place sufficient to thwart” the influence of targeted deepfakes this year? That’s a question posed by Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania. “The capacity for individuals and nation-states to generate more misleading content that is microtargeted and harder to detect could happen,” Jamieson said in an interview.

A broad swath of tech companies acknowledged the threat late Friday, when Alphabet Inc.’s Google, Amazon.com Inc., Facebook parent Meta Platforms Inc., Microsoft Corp., OpenAI, X, Adobe Inc., International Business Machines Corp., TikTok and others signed a pact to voluntarily adopt “reasonable precautions” to prevent AI tools from being used to disrupt democratic elections worldwide.

In recent weeks, OpenAI, Google and Meta have taken steps to limit the abuse of AI in elections.

AI-generated deepfakes have started making their way into presidential campaign ads and elections. Last month, a robocall impersonating President Joe Biden’s voice attempted to discourage voting ahead of the New Hampshire primary. That robocall was flagged by security experts and covered by U.S. media, but others have probably gone undetected.

The threats appear even more ominous outside the U.S. Days before Slovakia’s parliamentary elections in September, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. The false narrative spread quickly across social media.

This month, Meta’s oversight board, an independent panel of academics, lawyers and experts that reviews contentious content decisions on the platform, criticized the company for its “incoherent” and “confusing” policies on manipulated media after an altered video of Biden spread on Facebook.

Meta decided not to remove the edited video, which appeared to show Biden inappropriately touching his granddaughter. In reality, he was placing an “I Voted” sticker on her chest.

“The inability to trust our senses could lead to distrust and paranoia, further breaking down social and political relations between people,” said Sohrob Kazerounian, a distinguished AI researcher at Vectra AI.

Evolution of political meddling online

As technology has evolved, so have the methods that individuals and nation-states use to meddle in elections, from robocalls and targeted hit mail to specious internet rumors. One such robocall campaign targeted Republican candidate John McCain before the 2000 South Carolina primary.

The rise of social media played a decisive role in the 2016 presidential election, when Russian-sponsored actors flooded platforms with troll content.

Following the brouhaha, officials at Facebook, Twitter and other companies shored up their defenses, leading to comparatively clean campaign seasons in 2018 and 2020.

Then came the Jan. 6, 2021, insurrection, stoked in large part by social media. Now, with the emergence of generative AI and drastic cutbacks at belt-tightening social-media companies, all bets are off in 2024.

Since 2020, X, Meta and YouTube have laid off thousands of employees and contractors, including content moderators.

“Even if fully staffed, it is still not enough when half of the world is to vote amid the spread of nuanced, tailored deepfakes,” Jevin West, an associate professor at the University of Washington, said in an interview.

The cuts to trust and safety teams, combined with a fusillade of deceptive content, “erode trust in a democratic system” and lead “voters to confusion and misperception,” according to West.

“My biggest concern is this has set the stage for things to be worse in 2024 than in 2020,” West said.

Solutions being explored by Congress and the Federal Election Commission have yet to become legislation or rules. That leaves the onus on states such as Colorado, Minnesota and Wisconsin, which are pushing online public-education efforts to promote election officials as a trusted source of election information in 2024.

The Federal Communications Commission this month unanimously banned unsolicited robocalls that use AI-generated voices, saying the technology can mislead people. The FCC said such calls are prohibited under the 1991 Telephone Consumer Protection Act, which restricts marketing calls that use artificial or prerecorded voice messages. Robocalls must offer recipients a way to opt out of future calls, the FCC said.

Read more: AI-generated voices in robocalls can deceive voters. The FCC just made them illegal.

Ultimately, generative AI will have an impact on elections, experts agree. The question is whether that impact will be good or bad.

“All we hear about is nefarious AI use,” election lawyer Jessica Furst Johnson said in an interview. “But because it is new, it can also be used by election teams to communicate with voters. We don’t really know how it will be used.”
