Until now, the chips powering your computer have largely come from Intel, AMD or Apple. But Qualcomm, a company whose chips have primarily been for phones, believes its chips will start powering your computers soon too.
The reason is generative AI, which brings new creative clout to tasks like producing text, editing photos and concocting illustrations, and which is the buzz in Silicon Valley thanks to ChatGPT, Bing and other attention-grabbing tools. Generative AI is now being built into Qualcomm’s most powerful chips, starting with the new Oryon CPU. It won’t run enormous AI models like ChatGPT, but it can handle smaller ones, and Qualcomm hopes that on-device ability will give your next computer a leg up on AI tasks.
To make its ambition a reality, Qualcomm has partnered with HP and other PC makers, it revealed at its annual Snapdragon Summit in Hawaii. It expects you’ll be able to buy computers running its new chips in the middle of 2024.
The new processors could be a major force in improving personal computing. Apple’s M series of computer processors demonstrated compelling speed and battery life, and Qualcomm could help bring similar advantages to Windows machines. Its processors, like Apple’s, are members of the Arm technology family that has branched out from phones to tablets, cars and even some supercomputers. Qualcomm’s previous Arm-based laptop chips have been lackluster, but Oryon is an entirely new design.
Qualcomm is touting big performance upgrades, too, for its Snapdragon X Elite, a new processor that incorporates the Oryon (pronounced like “Orion”) CPU, a graphics processing unit (GPU) and a neural processing unit (NPU) for AI. A pair of its processing cores can run at up to 4.3GHz in short bursts, and Qualcomm says the chip delivers up to twice the performance of laptops powered by Intel’s 10-core and 12-core i7 processors while consuming a third of the power. The company also says the Snapdragon X Elite outpaces Intel’s 14-core i7.
On stage, Qualcomm CEO Cristiano Amon showed a graph stating that the Qualcomm Oryon outperformed Apple’s M2 and Intel’s i9-13980HX silicon in single-threaded CPU performance, and matched their peak performance at 30% and 70% less power, respectively. The Oryon is also capable of 50% faster peak multithreaded CPU performance than the Apple M2 chip.
“The Oryon CPU is the new CPU leader in mobile computing. It’s been designed by Qualcomm from the ground up to have an unprecedented level of performance at extremely low power,” Amon said on stage here. “There’s a new sheriff in town.”
Including AI accelerators is now table stakes for processor makers. Apple’s M series of laptop chips already has AI acceleration technology, and Intel is touting its Meteor Lake processors, due in laptops shipping in December, as the brains of a new generation of “AI PCs.”
The Snapdragon X Elite’s Adreno GPU is capable of up to 4.6 teraflops of graphics processing power, and it can drive displays at up to 4K resolution and 120Hz with HDR10, supporting either three UHD or two 5K external monitors.
But Qualcomm’s big swing is on-device AI. Combining its NPU, CPU and GPU, the Snapdragon X Elite can reach 75 trillion operations per second (TOPS) in bursts and can sustain 45 TOPS.
The benefit of all this AI hardware depends on support from software makers, too. Adobe software like Lightroom and Photoshop uses AI, and Microsoft and Meta are working on their own improvements.
Qualcomm had already announced it’s teaming up with Microsoft and Meta on the Llama 2 generative AI model, and the X Elite chip can run the 13 billion parameter version of Llama 2 at up to 30 tokens per second, along with the more common 7 billion parameter version. As Qualcomm GM of mobile compute and XR Alex Katouzian pointed out, humans can only read about 200 to 300 words per minute, which corresponds to five to seven tokens per second.
“Our on-device AI can write faster than you can read,” Katouzian said.
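For readers who want to check that comparison, here’s a minimal back-of-the-envelope sketch. It assumes roughly 0.75 words per token, a common rule of thumb for English text that is not a Qualcomm figure; the exact ratio depends on the tokenizer.

```python
# Rough conversion from human reading speed to tokens per second.
# Assumes ~0.75 words per token (an assumption, not a Qualcomm figure).
WORDS_PER_TOKEN = 0.75

def reading_speed_tokens_per_second(words_per_minute: float) -> float:
    """Convert a reading speed in words per minute to tokens per second."""
    words_per_second = words_per_minute / 60
    return words_per_second / WORDS_PER_TOKEN

for wpm in (200, 300):
    tps = reading_speed_tokens_per_second(wpm)
    print(f"{wpm} words/min ≈ {tps:.1f} tokens/sec")

# Prints roughly 4.4 and 6.7 tokens/sec, in line with the five to seven
# tokens per second Katouzian cited, and well below the up to 30 tokens
# per second Qualcomm claims for Llama 2 on the chip.
```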
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.