There have been many warnings, including from President Joe Biden, about how generative AI can be used to manipulate audio and video to create deepfakes that show people — politicians among them — saying or doing things they didn’t actually say or do.
If you’re among those who think, ‘Phooey, those concerns are just overblown,’ then consider three recent deepfakes involving musician Taylor Swift, X owner Elon Musk and Biden.
Swifties will know that she’s a fan of Le Creuset cookware. “Her collection of the cookware has been featured on a Tumblr account dedicated to the pop star’s home décor, showcased in her gift choices at a fan’s bridal shower and shown in a Netflix documentary that was highlighted by Le Creuset’s Facebook page,” reported The New York Times.
But her love of colorful enameled cookware didn’t prompt her to pitch the pricey pots and such in ads, which showed up on Facebook and TikTok. The ads, which used her AI-generated voice and face, had Swift supposedly telling her fans that she was “thrilled” to offer free cookware sets to those who answered a few questions, before reeling them in with the actual scam.
Noted The NYT, “The ads sent viewers to websites that mimicked legitimate outlets like the Food Network, which showcased fake news coverage of the Le Creuset offer alongside testimonials from fabricated customers. Participants were asked to pay a ‘small shipping fee of $9.96’ for the cookware. Those who complied faced hidden monthly charges without ever receiving the promised cookware.”
In the case of Musk, a fake version of the billionaire entrepreneur was shown on Facebook promoting a phony stock trading scheme called Quantum AI to Australians interested in getting “rich quick.” The deepfake Musk says on video, “The latest platform, Quantum AI, will help people get rich quick, not work for every penny,” and calls out other billionaires — Richard Branson, Jeff Bezos and Bill Gates — as “prominent shareholders” before a fake news reporter directs viewers to “make a minimum investment of $400” on the Quantum AI website, according to a report by RMIT News.
Celebrities’ images and voices being co-opted to scam consumers, unfortunately, isn’t new, because scamming works: consumers are cheated out of billions of dollars each year. The Federal Trade Commission says that people lost nearly $8.8 billion to fraud in 2022 – and that’s before gen AI tech really ramped up.
Beyond Swift and Musk, scammers have copied celebrity chef Gordon Ramsay as part of an identity theft scheme, created a fake Oprah Winfrey to pitch keto gummy bear supplements and generated a fake Tom Hanks touting dental plans. But gen AI tech, including text-to-video and text-to-audio converters, makes it much, much easier for scammers to quickly create realistic-looking deepfakes. The Better Business Bureau issued a warning in April 2023, telling consumers to be on guard when it comes to celebrity endorsements, since “ever-improving AI technology [makes] these phony endorsements more convincing than ever.”
Many of these celebrity deepfakes proliferate on social media sites, the BBB said, so be on the alert. The bureau invites consumers who’ve been scammed, or targeted by a scam, to file a report here.
As far as elections go, the New Hampshire Department of Justice issued an advisory a day ahead of the state’s Jan. 23 primary after someone sent out a robocall using a faked voice of President Biden that encouraged voters not to vote in the New Hampshire presidential primary election. The robocall then told recipients who wanted to “be removed from the calling list” to call a number belonging to the scammer, presumably so they could be added to a list for future disinformation and scams. The state attorney general’s office called the robocall an attempt to “suppress New Hampshire voters,” which it is.
It’s only funny until someone loses a democracy.
Here are the other doings in AI worth your attention.
AI won’t steal all the jobs because the ROI isn’t there — yet
In the latest study of how AI may or may not affect the future of work, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) said that it’s not cost-effective to replace humans with AI across a variety of industries – at least not yet.
“While there is already evidence that AI is changing labor demand, most anxieties about AI flow from predictions about ‘AI Exposure’ that classify tasks or abilities by their potential for automation,” the five researchers wrote. “The previous literature on ‘AI Exposure’ cannot predict this pace of automation since it attempts to measure an overall potential for AI to affect an area, not the technical feasibility and economic attractiveness of building such systems.”
They concluded, after studying how advancements in computer vision might affect jobs, that “at today’s costs US businesses would choose not to automate most vision tasks that have ‘AI Exposure,’ and that only 23% of worker wages being paid for vision tasks would be attractive to automate.”
But there’s a caveat: “This slower roll-out of AI can be accelerated if costs fall rapidly or if it is deployed via AI-as-a-service platforms that have greater scale than individual firms.”
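To make that cost calculus concrete, here’s a minimal sketch of the break-even logic the MIT framework describes: a task is attractive to automate only when the annualized cost of building and running an AI system undercuts the wages paid for that task. All the figures below are hypothetical, purely for illustration, not numbers from the study.

```python
# Toy break-even check for the "attractive to automate" logic described
# in the MIT CSAIL study. All figures below are hypothetical.

def attractive_to_automate(annual_wages_for_task: float,
                           build_cost: float,
                           annual_running_cost: float,
                           amortization_years: int = 5) -> bool:
    """Return True if the annualized AI cost undercuts the task's wage bill."""
    annualized_ai_cost = build_cost / amortization_years + annual_running_cost
    return annualized_ai_cost < annual_wages_for_task

# Hypothetical example: a vision task absorbing $50,000/year in wages,
# versus a $300,000 custom system amortized over 5 years plus $20,000/year to run.
print(attractive_to_automate(50_000, 300_000, 20_000))  # False: not worth it

# If an AI-as-a-service platform spreads the build cost across many firms,
# the per-firm economics can flip, which is the caveat the researchers flag.
print(attractive_to_automate(50_000, 30_000, 20_000))   # True
```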
For me, the TL;DR is that all the magical thinking that gen AI can replace workers very soon remains just that: magical thinking. Say the MIT researchers: “AI job displacement will be substantial, but also gradual — and therefore there is room for policy and retraining to mitigate unemployment impacts.”
I’ve written a lot about how jobs may be affected by AI, including in this overview about why you should pay attention and start experimenting with chatbots like ChatGPT. While Goldman Sachs also says job concerns may not be as dire as some predict – the firm noted in a widely cited March 2023 report that 60% of today’s workers are employed in occupations that didn’t exist in 1940 – it still says AI will cause “significant disruption” to the labor market in the next six years.
Mark Zuckerberg makes the pitch for open-source AI models
Meta CEO Mark Zuckerberg shared thoughts with tech insider site The Verge on his company’s investment in AI and why he thinks other companies should also open source their tech, as Meta did with its Llama large language model. The conversation centered on building an artificial general intelligence, a system capable of handling any task that a human can do — and possibly doing those tasks better. That’s different from generative AI (see definitions below).
On defining AGI: “I don’t have a one-sentence, pithy definition. You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition,” Zuckerberg said, adding, “I’m not actually that sure that some specific threshold will feel that profound.”
On the competition for AI talent: “We’re used to there being pretty intense talent wars. But there are different dynamics here with multiple companies going for the same profile, [and] a lot of VCs and folks throwing money at different projects, making it easy for people to start different things externally.”
On who controls AI and the need to make AGI models, like Meta’s Llama, available as open source: “I tend to think that one of the bigger challenges here will be that if you build something that’s really valuable, then it ends up getting very concentrated. Whereas, if you make it more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value. So that’s a big part of the whole open-source vision.”
On industry players eschewing open source and now calling for AI regulation: “There were all these companies that used to be open, used to publish all their work and used to talk about how they were going to open source all their work. I think you see the dynamic of people just realizing, ‘Hey, this is going to be a really valuable thing, let’s not share it,'” Zuckerberg said.
“The biggest companies that started off with the biggest leads are also, in a lot of cases, the ones calling the most for saying you need to put in place all these guardrails on how everyone else builds AI. I’m sure some of them are legitimately concerned about safety, but it’s a hell of a thing how much it lines up with the strategy.”
How AI is changing the way we ask questions about our health
Raise your hand if you’ve ever turned to Google to diagnose a medical issue. With AI, expect even more of us to turn to ChatGPT and other tools to get answers to our health questions.
CNET’s Jessica Rendall explains that AI is changing the way we investigate our health — for better and for worse. ChatGPT’s ability to “quickly synthesize information and personalize results raises the precedent set by ‘Dr. Google,’” the researchers’ term describing the act of people looking up their symptoms online before they see a doctor. More often we call it “self-diagnosing,” she reports.
For people with chronic and sometimes mysterious health conditions that have left them with no good answers after numerous attempts to get a diagnosis, AI may be a game changer — analyzing a list of symptoms to suggest possible causes.
But there are a few concerns, the biggest of which is that AIs can hallucinate, or give you information that sounds true but actually isn’t. Another concern is “the possibility you could develop ‘cyberchondria,’ or anxiety over finding information that’s not helpful, for instance diagnosing yourself with a brain tumor when your head pain is more likely from dehydration or a cluster headache,” Rendall said.
Still, ChatGPT can be helpful in translating medical jargon into simple English so patients can have more meaningful interactions with their doctors. Adds Rendall, “Arguably the best way to use ChatGPT as a ‘regular person’ without a medical degree or training is to make it help you find the right questions to ask.”
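If you’d rather script that jargon-translation trick than type into the chat window, here’s a minimal sketch using OpenAI’s Python SDK. The model choice, system prompt and sample finding are all illustrative assumptions, and of course none of this is medical advice.

```python
# Minimal sketch: asking a chatbot to translate medical jargon into
# plain English via OpenAI's Python SDK. The model name, prompt and
# sample finding are illustrative only; this is not medical advice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

jargon = "Mild bilateral maxillary sinus mucosal thickening is noted."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[
        {"role": "system",
         "content": "Explain medical findings in plain English and "
                    "suggest questions the patient could ask their doctor."},
        {"role": "user", "content": jargon},
    ],
)
print(response.choices[0].message.content)
```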
‘Flawless’ novel wins literary prize with help from ChatGPT
Advocates of gen AI, who say the tech can enhance human achievement and enable humanity to reach new heights, scored a win this week after a Japanese author won a prestigious literary award with a novel deemed by one judge to be “flawless,” according to The Times.
How did Rie Kudan, whose work The Tokyo Tower of Sympathy earned the Akutagawa Prize, achieve such perfection? Kudan said it was due in part to ChatGPT. At an awards ceremony this week, the 33-year-old author said that about 5% of her book was created by OpenAI’s popular chatbot and quoted verbatim in the novel, The Telegraph added.
“Set in a futuristic Tokyo, the book revolves around a high-rise prison tower and its architect’s intolerance of criminals, with AI a recurring theme,” The Daily Mail noted. The Telegraph said, “It centers around an architect who designs a comfortable high-rise prison, but finds herself struggling in a society that seems excessively sympathetic to criminals.”
Kudan said she confides her innermost thoughts to ChatGPT — including sentiments she says she would never talk to anyone else about — and that its responses “sometimes inspired dialogue in the novel,” according to The Telegraph.
Not all authors are as enamored of working with generative AI as Kudan. The Authors Guild, which represents novelists such as John Grisham, George R.R. Martin, Jodi Picoult and Scott Turow, filed suit against OpenAI in September and amended its complaint in December.
And award-winning author Salman Rushdie says he thinks gen AI tools still have a long way to go before they can mimic the artistry of human writers. At a literary event in October, he noted that someone had used an AI to generate 300 words in his style, “and what came out was pure garbage.”
“The greatest writers, the best writers have a vision of the world that is personal to themselves, they have a kind of take on reality which is theirs and out of which their whole sensibility proceeds,” Rushdie told Big Think. “Now to have all that in the form of artificial intelligence — I don’t think we’re anywhere near that yet.”
One artist is using prompts to create drawings, with a pen
In a creative play on AI prompts and text-to-image converters, New York graphic designer Pablo Delcan created a “non-AI generative AI model.” It’s a website called Prompt-Brush 1.0: you submit a text prompt, and Delcan does a black-and-white line drawing of your idea and sends it back to you. Some of the ideas submitted and, charmingly I think, illustrated by Delcan include a UFO beaming up a slice of pizza, a smiling old man, a gray-and-white tuxedo cat and a grim reaper frustrated with his laptop. He’s posted a selection of the more than 631 images he’s created and has requests for over 1,000 more in the queue, according to It’s Nice That.
Delcan told It’s Nice That that it takes him about a minute to create each drawing and that after spending the past year “immersed in the world of AI, this seemed like a way to poke some fun at that.” His sense of humor is evident in the “site metrics” he shares and in his description of the “technology” behind his service: “A brush is used to draw by dipping it into black ink and then moving it across a piece of paper to leave marks. Light touches make thin lines, while pressing harder makes thick lines. It’s possible to make all sorts of drawings by connecting these lines.”
I’ve submitted my request and will post when I hopefully get an original Delcan back.
AI term of the week: AGI
Artificial general intelligence (AGI) is the Holy Grail of AI — a system that can do any task a human can do, and possibly do it better. What’s the difference between an AGI and, say, gen AI models like ChatGPT? I think of ChatGPT as a tech you can talk to that mimics or predicts human responses — it provides answers to questions like an autocomplete on steroids, while AGI is more akin to HAL from 2001: A Space Odyssey or JARVIS from Iron Man.
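If “autocomplete on steroids” sounds abstract, here’s a toy next-word predictor in Python. It’s a deliberately crude bigram counter, nothing like the neural networks behind ChatGPT, but it illustrates the same predict-the-next-token principle on a made-up scrap of text.

```python
# A toy next-word predictor: the "autocomplete" idea behind chatbots,
# reduced to counting which word most often follows another.
# Real LLMs predict the next token with neural networks trained on
# vastly more text; this bigram counter just illustrates the principle.
from collections import Counter, defaultdict

corpus = ("the doctor said the headache was from dehydration "
          "the doctor said rest and water").split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    return followers[word].most_common(1)[0][0] if followers[word] else "?"

print(predict_next("the"))     # 'doctor' — its most common follower
print(predict_next("doctor"))  # 'said'
```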
Here are a few definitions of AGI, which, by the way, doesn’t yet exist — at least on Earth. Read through all of these and then check out the final line from Google DeepMind’s description below to get a true sense of how complicated all this stuff is.
Luce Innovative Technologies compares AI, generative AI and AGI: “AI refers to the field in general, generative AI focuses on the creation of new content and general AI aims to develop artificial intelligence systems that are as capable as humans in a variety of cognitive tasks. General AI, also known as AGI (Artificial General Intelligence) or ASI (Artificial Super General Intelligence), is a long-term goal and has not been fully achieved.”
Market research firm Gartner describes AGI as “a form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks and domains. It can be applied to a much broader set of use cases and incorporates cognitive flexibility, adaptability and general problem-solving skills.”
IBM says “strong artificial intelligence (AI), also known as artificial general intelligence (AGI) or general AI, is a theoretical form of AI used to describe a certain mindset of AI development. If researchers are able to develop Strong AI, the machine would require an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.”
And last but not least, Google DeepMind describes AGI as “an important and sometimes controversial concept in computing research, used to describe an AI system that is at least as capable as a human at most tasks. Given the rapid advancement of Machine Learning (ML) models, the concept of AGI has passed from being the subject of philosophical debate to one with near-term practical relevance. Some experts believe that ‘sparks’ of AGI are already present in the latest generation of large language models (LLMs); some predict AI will broadly outperform humans within about a decade; some even assert that current LLMs are AGIs. However, if you were to ask 100 AI experts to define what they mean by ‘AGI,’ you would likely get 100 related but different definitions.”
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.