A robocall created with artificial intelligence that impersonated President Joe Biden and targeted voters in New Hampshire earlier this month is just the latest example of how rapidly advancing AI tools are a growing threat to elections — and more broadly to society.
Deepfakes such as the Biden robocall or fake recordings in Slovakia’s elections last year are worrying misinformation researchers and political experts as AI is democratized and used to create multimodal content — not just doctored text and images, such as the Taylor Swift deepfakes, but also audio and video.
And when multiple pieces of fake content related to the same subject are pushed out, it can create a more believable narrative.
“You can imagine building a universe in the last days leading up to an election that looks convincing,” said Ryan Calo, a University of Washington law professor and tech policy expert. “That’s the kind of thing that Jevin and I lose sleep over.”
Calo spoke on a panel at the Rainier Club in Seattle on Monday with UW information school professor Jevin West. They are co-founders of the UW’s Center for an Informed Public, which studies the spread of strategic misinformation.
The quality of deepfakes and other AI-generated content will almost certainly get better and harder to detect, West said.
This can also affect legitimate content: people may falsely claim artificiality, dismissing something as fake when it's actually real.
Above all, West said he’s most concerned about “lowering levels of trust in our system” at a time when public trust in government is at historic lows.
However, he is encouraged by some of the work going into watermarking and other technologies for identifying fake content.
“There is big business in fighting misinformation,” West said.
But educating the general public about how to decipher authentic information from fake content will be a challenge, said David Frockt, a former Washington state senator who tried to enact legislation against deepfakes.
“I really worry about the disinformation that’s out there and how almost there’s nothing we can do about it,” he said.
The panelists did point to some potential benefits of AI for government, such as helping constituents find buried civic data or improving predictions of weather and traffic patterns.
They also discussed whether and how regulators should step in to curb the impact of AI-generated misinformation, or whether that responsibility lies with the tech companies developing AI tools and providing platforms for the content.
Calo said it’s clear that laws and legal institutions will need to adapt to changes brought by AI. He said it took far too long for people to recognize the harms of the internet, and he hopes we don’t repeat that delay.
“We pretended for decades that this was the first multibillion-dollar human activity that did not have a substantial downside, and now we are living through an era of misinformation, privacy harms, security issues, and so on,” he said. “Let’s not make the same mistake about AI.”