Max Headroom as prophecy.

Aurich Lawson | Channel 1


Here at Ars, we’ve long covered the interesting potential and significant peril (and occasional silliness) of AI-generated video featuring increasingly realistic human avatars. Heck, we even went to the trouble of making our own “deepfake” Mark Zuckerberg back in 2019, when the underlying technology wasn’t nearly as robust as it is today.

But even with all that background, startup Channel 1’s vision of a near future where AI-generated avatars read you the news was a bit of a shock to the system. The company’s recent proof-of-concept “showcase” newscast reveals just how far AI-generated videos of humans have come in a short time, and how those realistic avatars could shake up a lot more than just the job market for talking heads.

“…the newscasters have been changed to protect the innocent”

To be clear, Channel 1 isn’t trying to fool people with “deepfakes” of existing news anchors or anything like that. In the first few seconds of its sample newscast, it identifies its talking heads as a “team of AI-generated reporters.” A few seconds later, one of those talking heads explains further: “You can hear us and see our lips moving, but no one was recorded saying what we’re all saying. I’m powered by sophisticated systems behind the scenes.”

Even with those kinds of warnings, I found I had to constantly remind myself that the “people” I was watching deliver the news here were only “based on real people who have been compensated for use of their likeness,” as Deadline reports (how much they were compensated will probably be of great concern to actors who recently went on strike in part over the issue of AI likenesses). Everything from the lip-syncing to the intonations to subtle gestures and body movements of these Channel 1 anchors gives an eerily convincing presentation of a real newscaster talking into the camera.

Sure, if you look closely, there are a few telltale anomalies that expose these reporters as computer creations—slight video distortions around the mouth, say, or overly repetitive hand gestures, or a nonsensical word emphasis choice. But those signs are so small that they would be easy to miss at a casual glance or on a small screen like that of a phone.

In other words, human-looking AI avatars now seem well on their way to climbing out of the uncanny valley, at least when it comes to news anchors who sit at a desk or stand still in front of a green screen. Channel 1 investor Adam Mosam told Deadline it “has gotten to a place where it’s comfortable to watch,” and I have to say I agree.

A Channel 1 clip shows how its system can make video sources appear to speak a different language.

The same technology can be applied to on-the-scene news videos as well. About eight minutes into the sample newscast, Channel 1 shows a video of a European tropical storm victim describing the wreckage in French. Then it shows an AI-generated version of the same footage with the source speaking perfect English, using a facsimile of his original voice and artificial lip-syncing placed over his mouth.

Without the on-screen warning that this was “AI generated Language: Translated from French,” it would be easy to believe that the video was of an American expatriate rather than a native French speaker. And the effect is much more dramatic than the usual TV news practice of having an unseen interpreter speak over the footage.

