Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Meta Connect event in Menlo Park, California, on September 27, 2023.

On Thursday, Meta CEO Mark Zuckerberg announced that his company is working on building “general intelligence” for AI assistants and “open sourcing it responsibly,” and that Meta is bringing together its two major research groups (FAIR and GenAI) to make it happen.

“It’s become clearer that the next generation of services requires building full general intelligence,” Zuckerberg said in an Instagram Reel. “This technology is so important, and the opportunities are so great that we should open source and make it as widely available as we responsibly can so that everyone can benefit.”

Notably, Zuckerberg did not mention the phrase “artificial general intelligence” (AGI) by name in his announcement, but a report from The Verge suggests he is steering in that direction. AGI is a somewhat nebulous term for a hypothetical technology that would match human intelligence at performing general tasks without the need for task-specific training. It’s the stated goal of Meta competitor OpenAI, and one that many have feared might pose an existential threat to humanity or replace humans working in intellectual jobs.

On the definition of AGI, Zuckerberg told The Verge, “You can quibble about if general intelligence is akin to human-level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.” He suggested that AGI won’t be achieved all at once, but gradually over time.

Business as usual?

Mark Zuckerberg, chief executive officer of Meta Platforms Inc., fist-bumps a mysterious hand during the Meta Connect event in Menlo Park, California, on September 27, 2023.

Zuckerberg’s Instagram announcement makes the potential invention of truly general AI sound like a casual business development, nothing to be particularly worried about. In fact, it’s apparently so harmless and beneficial that Meta might even open source it and share it with everyone (“responsibly,” of course).

His statement is part of a growing trend of downplaying the idea that AGI poses an imminent threat. Earlier this week, during an interview at the World Economic Forum in Davos, OpenAI CEO Sam Altman said that AI “will change the world much less than we all think, and it will change jobs much less than we all think,” and that AGI could be developed in the “reasonably close-ish future.”

This relatively calm, business-as-usual tone from Zuckerberg and Altman is a far cry from the drumbeat of world-threatening hype we heard throughout 2023 after the launch of Bing Chat and GPT-4 (and to be fair, Zuckerberg never joined the AI doom club). Even Elon Musk, who signed the six-month pause letter, is promoting a large language model in the form of Grok.

Perhaps cooler heads will prevail now, and maybe some lowering of expectations is in order as we see that, in many ways, large language models, as interesting as they are, might not be fully ready for widespread, reliable use. They also might not be the path to AGI, as Meta Chief AI Scientist Yann LeCun often likes to say.

Elsewhere in Zuckerberg’s announcement, he said that Llama 3 is in training (a follow-up to Llama 2) and that Meta is amassing a monstrous GPU capacity for training and running AI models: “350,000 Nvidia H100s, or around 600,000 H100 equivalents of compute, if you include other GPUs,” he said.
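
For readers wondering what “H100 equivalents” means in practice, it is a way of normalizing a mixed fleet of GPUs to a single unit of compute. Below is a minimal back-of-the-envelope sketch of that normalization. Only the 350,000 H100 figure comes from Zuckerberg’s statement; the peak-throughput numbers are approximate public spec-sheet values, and the count of other GPUs is a made-up placeholder for illustration, not a figure from Meta.

    # Back-of-the-envelope sketch: normalizing a mixed GPU fleet
    # to "H100 equivalents" by peak dense BF16 throughput (TFLOPS).
    # Spec numbers are approximate; the A100 count below is a
    # hypothetical placeholder, not Meta's actual inventory.

    PEAK_BF16_TFLOPS = {
        "H100": 989,   # H100 SXM, dense BF16 (approx.)
        "A100": 312,   # A100 SXM, dense BF16 (approx.)
    }

    fleet = {
        "H100": 350_000,   # figure Zuckerberg cited for end of 2024
        "A100": 600_000,   # illustrative assumption only
    }

    h100_equivalents = sum(
        count * PEAK_BF16_TFLOPS[gpu] / PEAK_BF16_TFLOPS["H100"]
        for gpu, count in fleet.items()
    )

    print(f"Roughly {h100_equivalents:,.0f} H100 equivalents of compute")

Under these assumed numbers, each older GPU counts as a fraction of an H100, so a large stock of previous-generation hardware can add a few hundred thousand H100 equivalents on top of the H100s themselves.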

Here is a transcript of Zuckerberg’s full statement in his Instagram Reel:

Hey everyone. Today, I’m bringing Meta’s two AI research efforts closer together to support our long term goals of building general intelligence, open sourcing it responsibly, and making it available and useful for everyone in all of our daily lives. It’s become clearer that the next generation of services requires building full general intelligence—building the best AI assistants, AIs for creators, AIs for businesses, and more—that means advances in every area of AI. From reasoning to planning to coding to memory and other cognitive abilities. This technology is so important and the opportunities are so great that we should open source and make it as widely available as we responsibly can so that everyone can benefit.

And we’re building an absolutely massive amount of infrastructure to support this. By the end of this year, we’re going to have around 350,000 NVIDIA H100s, or around 600,000 H100 equivalents of compute, if you include other GPUs. We’re currently training Llama 3, and we’ve got an exciting roadmap of future models that we’re going to keep training responsibly and safely too.

People are also going to need new devices for AI, and this brings together AI and the metaverse. Because over time, I think a lot of us are going to talk to AIs frequently throughout the day. And I think a lot of us are going to do that using glasses, because glasses are the ideal form factor for letting an AI see what you see and hear what you hear, so it’s always available to help out. Ray-Ban Meta glasses with Meta AI are already off to a very strong start, and overall across all this stuff, we are just getting started.

Listing image by Benj Edwards | Getty Images

