Intel CEO Pat Gelsinger. Intel

There was a time when Intel and Nvidia could more or less stay out of each other’s lanes. That time is over, especially with Intel entering the GPU race and both companies pushing hard into AI.

The new rivalry came to a head when Intel announced its new Core Ultra and 5th Gen Xeon chips at an event in New York City. Intel CEO Pat Gelsinger took an interesting jab at Nvidia’s CUDA technology. According to him, inference is going to become more important than training for AI. He also questioned the long-term dominance of Nvidia’s CUDA as an interface for training, calling it a “shallow moat that the industry is motivated to eradicate.” Ouch. Those are fightin’ words.

For the uninitiated, CUDA is short for Compute Unified Device Architecture, a parallel computing platform available exclusively on Nvidia graphics cards. Programmers use CUDA libraries to tap into the computational power of Nvidia GPUs, accelerating the execution of machine learning algorithms. It’s worth noting that the technology is proprietary rather than open source, despite having become something of a de facto standard.
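To make that concrete, here’s a minimal sketch of what leaning on CUDA looks like from Python, using the CuPy library, which calls Nvidia’s CUDA libraries (such as cuBLAS) under the hood. It assumes an Nvidia GPU with the CUDA driver and toolkit installed; the matrix sizes are arbitrary.

```python
# Minimal sketch: GPU math via CUDA-backed libraries. Assumes an Nvidia GPU
# with the CUDA driver/toolkit installed and the cupy package available.
import cupy as cp

a = cp.random.rand(4096, 4096, dtype=cp.float32)  # arrays live in GPU memory
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b                           # matrix multiply runs on the GPU via cuBLAS
cp.cuda.Stream.null.synchronize()   # wait for the GPU to finish
print(float(c.sum()))               # pull a result back to the host
```

The dependency Gelsinger is pointing at is exactly this: once code like the above sits in a training stack, it only runs where CUDA runs.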

On the other hand, industry efforts like MLIR, along with Google and OpenAI, are already moving toward a “Pythonic programming layer” to make AI training more open.
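OpenAI’s Triton language is one concrete example of that kind of Pythonic layer: GPU kernels are written as ordinary Python functions rather than against Nvidia’s proprietary toolchain. The sketch below is the standard vector-add example; the array length and block size are illustrative, and running it still requires a supported GPU.

```python
# Minimal sketch of a "Pythonic" GPU kernel written in OpenAI's Triton.
# Sizes and names are illustrative only.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # which block this instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard against the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(1024, device="cuda")
y = torch.rand(1024, device="cuda")
out = torch.empty_like(x)
add_kernel[(triton.cdiv(1024, 256),)](x, y, out, 1024, BLOCK_SIZE=256)
```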

While Intel won’t neglect the training side, its fundamental focus is on the inference market. “As inferencing occurs, hey, once you’ve trained the model … There is no CUDA dependency. It’s all about, can you run that model well?” said Gelsinger.

He went on to present Gaudi 3 as a key component for effective inference, alongside Xeon and edge PCs. While acknowledging Intel’s competition in training, he asserted that the inference market is where the future lies. The CEO also spoke about OpenVINO, the standard Intel has embraced for its AI efforts, and envisioned a future of mixed computing, with operations distributed between cloud environments and personal computers.
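For a sense of what that CUDA-free inference path looks like, here’s a minimal sketch using OpenVINO’s Python runtime. The model file name, input shape, and target device are placeholders; in practice the model would first be converted to OpenVINO’s IR format.

```python
# Minimal sketch: running inference with Intel's OpenVINO runtime.
# "model.xml" is a hypothetical placeholder for a model already converted
# to OpenVINO IR; the input shape and device are assumed for illustration.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")             # load the converted model
compiled = core.compile_model(model, "CPU")      # target could also be GPU or NPU

input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
output_layer = compiled.output(0)
result = compiled([input_data])[output_layer]    # run inference; no CUDA involved
print(result.shape)
```

Swapping the device string is how the same model moves between a Xeon server, an integrated GPU, or the NPU in a Meteor Lake laptop, which is the “mixed computing” future Gelsinger is describing.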

Intel might be onto something here. AI adoption is at an all-time high, and new ways of training and running models are going to be crucial for saving time and resources. It’s too early to say whether Intel’s strategy will unseat CUDA, but the fact that Intel’s newly launched Meteor Lake CPUs come with a built-in Neural Processing Unit (NPU) makes it clear the company has its eyes set on integrating AI deeply into its products.

All of this can get heady, but it’s clear that Nvidia has already become a dominant force in the world of AI, hitting trillion-dollar status earlier this year on the back of its success in the area. Intel has been more on its heels recently, and even if Gelsinger’s comments reflect a sentiment shared by other players in the industry, the boldness to call out Nvidia directly felt like something only an underdog could pull off.
