While I agree with the Lex editorial team’s statement that without specialist artificial intelligence chips there would be no generative AI boom (“Nvidia: one-stop chip shop”, Lex, November 23), I think the key to broad-based adoption is to overcome the efficiency challenge around deploying these models.

Hardware and software are two pieces of the same puzzle — neither gives you the full picture. Nvidia’s advance into AI software demonstrates an awareness of this fact. Given that, as the Lex columnists note, “demand already outstrips supply”, we also need an optimised approach to hardware design that ensures the hardware can be used efficiently, to bring AI to the wider market.

More powerful chips and more memory will always be needed, but unless these are combined with an architecture that ensures the chips are used effectively, the number of chips required, along with their energy demands, will quickly become unmanageable.

Reaching the total addressable market means demonstrating to enterprise customers that you can create value and boost productivity — driving results from silicon to business applications, using models that enterprises can own themselves. To do that, the industry must adopt a full-stack approach to AI.

Rodrigo Liang
Chief Executive and Co-founder, SambaNova Systems, Palo Alto, CA, US