Advanced Micro Devices, Inc. (NASDAQ:AMD) Barclays Global Technology Conference December 7, 2023 1:25 PM ET
Company Participants
Jean Hu – CFO
Conference Call Participants
Tom O’Malley – Barclays
Tom O’Malley
All right. I’m Tom O’Malley, semi and semi cap equipment analyst here at Barclays. Very pleased to have AMD CFO, Jean Hu, here with us. Thank you so much for joining us.
Jean Hu
Yeah, thank you for having us.
Tom O’Malley
Yeah. Well, I wanted to say, Jean, congrats on being in the CFO seat for a year now. Time flies.
Jean Hu
Almost.
Question-and-Answer Session
Q – Tom O’Malley
Almost a year. But I want to just start with what I know everyone kind of wants to talk about, especially after your event yesterday. Maybe you could start with just highlighting what you announced? Clearly, the stock is reacting today, but yeah, just some of your highlight announcements and we can dive into some of those.
Jean Hu
Yeah. We had a very exciting day yesterday with our customers, the partners, the ecosystem. It's actually a very important milestone and inflection point for AMD to be a strong player in the AI compute space. We all know the opportunities are tremendous, and we are really pleased we have gotten to the point where we can be very competitive in this marketplace. There were so many announcements yesterday, so what I'll do today is just highlight some of the key things we announced. You can absolutely listen to the webcast, and it's super exciting. First and foremost, we formally launched MI300A and MI300X and highlighted the performance advantage we have. Especially when you think about inference, MI300X has the industry-leading memory bandwidth and capacity, so it provides better TCO for customers. You can either run more models or you can use fewer GPUs. So, it's a great product, very competitive. And as we said during our last earnings call, MI300 will be the fastest revenue ramp in our history, and we are very confident about the $2 billion-plus revenue number we talked about for 2024.
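As a rough illustration of the memory-capacity point above (run more models or use fewer GPUs), here is a minimal back-of-the-envelope sketch. It only counts the HBM needed to hold model weights and ignores KV cache, activations, and interconnect; the 192 GB figure matches the announced MI300X capacity, while the 80 GB comparison point, the model size, and the precision are illustrative assumptions.

```python
import math

def gpus_needed(model_params_billion: float, bytes_per_param: int, hbm_gb: float) -> int:
    """Minimum number of GPUs whose combined HBM can hold the model weights."""
    weight_gb = model_params_billion * bytes_per_param  # 1B params at N bytes each ~= N GB
    return math.ceil(weight_gb / hbm_gb)

model_b = 70        # hypothetical 70B-parameter model
fp16_bytes = 2      # 16-bit weights

print("192 GB HBM per GPU:", gpus_needed(model_b, fp16_bytes, 192))  # -> 1
print(" 80 GB HBM per GPU:", gpus_needed(model_b, fp16_bytes, 80))   # -> 2
```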
Secondly, software. This is really important. We announced ROCm 6. When you think about the performance improvement we have made in a very short period of time, between ROCm 6 and MI300X we actually improved performance by eight times compared to the prior generation. So, that's very exciting. I think one of the most important things strategically, and the strength of ROCm, is really about embracing the open source ecosystem. It's important because we allow developers to write portable code that works on an NVIDIA machine, an AMD machine, or other GPUs. So, it actually gives tremendous flexibility and it's more cost-effective. It's exciting: we're going to launch ROCm 6 and we have expanded our open source ecosystem partnerships significantly. I think we announced yesterday that OpenAI is going to include MI300X support in their Triton 3.0 release very soon. That's not something that happened in the Triton 2 release. So this is a very significant announcement.
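A minimal sketch of the portability point: the same PyTorch script can run on an NVIDIA GPU (CUDA build of PyTorch) or an AMD GPU (ROCm build) without code changes, because the ROCm build exposes AMD devices through the torch.cuda namespace. The toy model and sizes below are illustrative assumptions, not anything announced at the event.

```python
import torch

# Pick a GPU if one is visible; on a ROCm build of PyTorch, AMD GPUs also
# appear under torch.cuda, so this line is the same on both vendors' hardware.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU")

# Toy model purely for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).to(device)

x = torch.randn(8, 1024, device=device)
with torch.no_grad():
    y = model(x)
print("Output shape:", tuple(y.shape))
```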
And then the third thing, on the networking side, we had a panel with Arista, Broadcom, and Cisco to really talk about the ecosystem, how we work with Ethernet and the Ultra Ethernet Consortium. Everybody is working on the ecosystem to make sure Generative AI adoption and the large clusters we can use provide the best TCO. So, very exciting. And some of you probably already saw the customer list we have. We had a very broad customer engagement yesterday. We only had the first set of customers showing up, which included Microsoft, Oracle, Meta. On the OEM/ODM side, you know, we have Dell, we have Supermicro, Lenovo, and a lot more customers that we are engaging with. And one example I think it's important to know is Meta: they literally brought up MI300X at OCP, and they commented it's the fastest bring-up from design to deployment in their history. So, that's really a strong proof point of not only the hardware capability, but also the software maturity.
So, we're very excited about the opportunities we have. Of course, in addition to all the hardware, software, and networking, we also talked about the PC side. We have the AI PC. AMD actually was the first to include an AI engine in the PC with the Ryzen 7000 series. We have already shipped millions of units, and we announced our next-generation product, which will also include the AI engine. The thing about us is we think about the portfolio, we think about end-to-end AI, and we have a strong portfolio to provide whole solutions from the PC side to the data center side.
Tom O’Malley
Perfect. Thank you for covering all those. I wanted to just dive into your reiterated view of the $2 billion next year. So, clearly, there's a handoff between the A and the X series, right? On the last earnings call, you guided December to $400 million and then roughly $300 million in sales, followed by a flattish March, and the rest of the year gets you to $2 billion. You have some HPC first and then you proceed to some hyperscale customers. Is that transition being so smooth a function of one customer, or is it all of the customers, maybe all of the ones you had on stage yesterday, all ramping very quickly?
Jean Hu
Yeah, a good question. When we talk about the MI300 ramp, we did say in Q4 it's predominantly from the supercomputer, MI300A. That's one customer. The deployment of a supercomputer tends to be lumpy: they deploy everything in one quarter, and then the next quarter, in Q1, it's actually very minimal. And then for Q1, we also said it's around $400 million, from multiple customers, right? Multiple customers, not only cloud customers but also enterprise customers and startups. So, we are very pleased with the customer lineup and the production commitment from customers.
Tom O’Malley
In terms of the open ecosystem that you mentioned and the qualification cycles for those new customers, there had been one announced customer for a while, and now it's official, with multiple coming on stage. Can you talk about, from the time a customer chooses to use your product, how quickly they can get that deployed in their cloud? Particularly since, as you mentioned, with ROCm you have a software layer and a hardware layer.
Jean Hu
Yeah, great question. It's very different. Different customer engagements, what kind of workloads they want to qualify: it takes different amounts of time. Some of our large customers, we have been working with for years. You know, Microsoft, we have been working with them for a long time. Meta, we also mentioned that we have worked with them from MI100 and 200 to now 300. Those are longer-term commitments with a lot of workloads. But some of the enterprise customers, because of the nature of ROCm, can actually port very quickly, in months. And something we also talked about a lot yesterday is the out-of-box experience with ROCm. So, for a lot of startups with very simple workloads, they literally can take it out of the box and make it happen. So, it's a wide range of things. But I think the most important thing is we have made tremendous progress on the software side to mature it to the point where different customers can have different experiences and port very quickly.
Tom O’Malley
Helpful. So I think you’re clearly very positive on that $2 billion outlook for next year. But I think something that consistently comes up with your competitor is capacity. And if you look at how much you could exceed that $2 billion, do you run into a wall at any point where capacity becomes an issue? And more particularly, does memory become an issue next year, just as you’ve heard more and more about constraints in that industry?
Jean Hu
Yeah, capacity is absolutely still very tight. What we have done is work with our supply ecosystem significantly for a long time, not just now, but going back six, seven, eight months ago, to make sure we can secure capacity. We absolutely have more than $2 billion of capacity to support revenue upside. And our view is we really want to make sure we are positioned for success, because we do have a very competitive product portfolio. And if you look at the customer engagement we had yesterday, it's only a very small portion of the customers. It took like two hours; it's a long, long event. It's because we just have so many customers who want to come, who want to be part of this announcement. I think there is a lot more engagement beyond yesterday's companies out there. So, we do feel pretty good about it.
Tom O’Malley
Helpful. So, NVIDIA has put themselves on a path to release new generations on a yearly cadence. They just recently announced their H200 product, and they're using more HBM to close the gap as well. Can you talk about what your planned cadence is for your platforms? And do you think that this will just be more and more HBM, or is there a way to increase performance at that rate from the compute perspective that gets you there?
Jean Hu
Yeah, I think the Generative AI market is so different from what we have seen in the past. The cadence, from both NVIDIA and AMD, is really driven by customer demand, right? The customer is driving that significant change and adoption and the requirements for hardware solutions. So, for us, if you look back, we have been developing GPUs for a long time. Even if you look at the MI series of products: in 2020 we had our first MI100, then MI200 to MI300, and we also had the MI250. So within four years, we did have multiple generations of MI. And with the customer support, really not only for us but for the whole industry, we will accelerate our cadence. I think we have the capability, and we are investing in R&D internally. We are reallocating our resources to increase R&D tremendously in the data center, especially on the GPU side. So, we feel pretty good about the team, the resources, and the capability of the company.
Tom O’Malley
I think a common pushback on any new entrant into a market, particularly if there's a software stronghold, is: how do you get into these customers? And I think you mentioned with some of the announcements yesterday that you're trying to create an open ecosystem such that your chips can work with your customers, just like the world of CUDA and NVIDIA today. When you look at the next year, can you talk about what it would take from your customers to have success on both internal workloads and external workloads? And how quickly do you think your software ecosystem takes hold where it's being used across all of the hyperscalers?
Jean Hu
Yeah, that's a great question. We are the new entrant into a market where NVIDIA has had the software development for like a decade or even longer. I think there are two aspects of our strategy. The first is, we have been doing this not just for the last two years, but actually four to five years. The most important strategic decision for us is that our ROCm software stack is open source in nature. We plan to work with the open source ecosystem and partner together to address this software challenge. So, it's not just us; we have the whole open ecosystem. If you just look at how fast the development of all the AI frameworks, the libraries, the compilers, and the models has been during the last 12 months, it's tremendous. So, I do think by partnering with the open source ecosystem, and secondly by investing aggressively in our own capability and resources, together we can actually provide our customers a much more cost-effective, flexible solution, right? Because in the end, when you have this kind of large market opportunity, customers really want to choose the hardware they want, and the ecosystem also wants to write the models very quickly. A lot of people are writing their models on the AI frameworks; they're using the libraries, the compilers, whatever is openly available. So, we do think what we are trying to do on the software side is partner with the ecosystem and really improve the ease and the flexibility so everybody can write their model quickly.
Tom O’Malley
Helpful. So, let's pivot to a market where you weren't a new entrant but a minority player, and you've really grown your share, which is the DC CPU side. So, you've gotten your share up to the high-20s. Can you talk about the time horizon for you to get to 50/50 share? Is that a goal of yours? And what do you think the business dynamic would be as an impact of that? Is there a margin differential in getting from this range to the 50% range? Is there a way that you need to communicate with customers that's different? Any details there would be helpful.
Jean Hu
Yeah, maybe the first thing is, let's take a step back to look at how we got to our market share. In Q3, when we announced our earnings, our revenue market share was actually 29.7%, so close to 30%. How we got there: some of you probably recall 2017. That's when AMD had its first server product portfolio, Gen 1. And during the next four or five years, we continued to execute on a very advanced roadmap to provide customers the best TCO, the best performance per watt, the best performance per dollar. And each generation, we are leading our competition significantly. So, getting to the market share we have today is because of the technology and the innovation we had. And when you look at the market share we have today, in the cloud market we are actually getting close to 50% market share. But in the enterprise, we are still in the mid-teens. So, from now on, going forward, we'll continue to drive product leadership and make sure we can lead and provide the best TCO. We'll continue to gain share in the cloud market, but most importantly, we have invested significantly in go-to-market during the last several years. And as you probably know, the enterprise market is very different. You have thousands of customers. Each customer buys fewer servers, and you literally need to work with each customer and give them the TCO. Once they learn they can get more performance, save power, save space, and save operating cost, they are very excited to adopt, but it takes time. So, it's a process. When we look at the design wins we have and the enterprise customer adoption, we do think, going forward, we'll continue to gain share in cloud, and on the enterprise side, because of our effort and the work, we should see that share-gain benefit going forward.
Tom O’Malley
So, you do have that technology lead, which has led to a lot of these share gains. But let's just create a world in the future. So, next year, Genoa and Bergamo are using 5 nanometer from TSMC. Intel is still on Intel 7. Let's fast forward. Let's say Intel's roadmap is successful. That has them reaching parity, at least in their eyes, fairly soon. We don't need to get into the weeds on the competitive specs, but in a world in which you are more at technology parity, what does that look like to you as a CFO? And do you see any change in the market in that instance?
Jean Hu
Yeah, I think one thing about AMD is that Lisa Su and the team always made the assumption that the market is competitive, that our competitors will be as competitive as we are on the process technology. If you look back at the success of the company, one of the things I learned is really that it's not just about the process technology. It's actually the combination of architecture, 3D packaging, working with the foundry partners, chiplet design, and all those things AMD has really worked very hard on for the last decade. The combination of those things gives us, like, a smaller die, right? So, it's more cost-efficient and power-efficient. And all those things provide the TCO advantage. I think from my perspective, really, we need to stay at the cutting edge. And we'll launch Turin, our next generation, Zen 5, next year, and we feel pretty good about the leadership, the performance, and the TCO we can provide.
Tom O’Malley
Perfect. Impressive getting through that question with the feedback there. But do you have a view of the traditional server market into next year? I think you saw cannibalization of wallet this year with the boom of AI. And I think there's a big disagreement on what the return to growth should look like, if there is any.
Jean Hu
Yeah, I think 2023, there's no doubt, is a challenging time for the traditional server market. I think a lot of you track that market. We all know, year-over-year, it actually declined, primarily because the unit volume declined quite significantly. Our view is there are two fundamental reasons. The first one is, during COVID, a lot of companies, just like in all the other semiconductor segments, built inventory. So there was digestion of inventory from the COVID oversupply situation. And then in the second half and later on, really, to your point, Generative AI and the need for CapEx to prioritize AI spending definitely crowded out some of the server spending. But when we think about it, the purpose of a server is to support transaction processing and the basic workloads. When you think about Meta with Instagram, Facebook, WhatsApp, or Amazon's shopping, all the fundamental basic workloads we do today are being done by servers. And frankly, that data continues to increase, doubling, and the need for compute continues. Our view is, when you think about how customers deploy hardware, it's all about best economics, how they can get the best TCO.
So, for all those traditional workloads, servers offer the best TCO. We think that is the case today and it will be the case in the future. So, the stage we're going through is really the digestion, the prioritization or optimization, but server depreciation life, everybody has extended to six years. Beyond six years, the challenge you are going to have is, with older servers, you actually need more power and more space, because when you look at Genoa, six Genoa can replace 12 Sapphire Rapids. And when you need space, you need power, you need efficiency, you really need to upgrade. So, our view is the server market is going to come back. It's going to serve the basic traditional workloads versus the incremental AI need going forward. It's not going to be like a high-growth market, but ASPs will continue to increase as the industry continues to drive core count increases.
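To make the server-refresh TCO argument concrete, here is a hedged sketch of the power-cost side of consolidation, assuming one new server can replace several older ones; the server counts, wattages, and electricity price are hypothetical placeholders, not AMD or Intel figures.

```python
def annual_power_cost(servers: int, watts_each: float, usd_per_kwh: float) -> float:
    """Electricity cost per year for a fleet running 24/7 (USD)."""
    hours_per_year = 24 * 365
    return servers * watts_each / 1000 * hours_per_year * usd_per_kwh

# Hypothetical fleets: the consolidation ratio, wattages, and price are assumptions.
old_fleet = {"servers": 12, "watts_each": 450}   # aging servers
new_fleet = {"servers": 4, "watts_each": 700}    # consolidated replacements

usd_per_kwh = 0.12
for name, fleet in (("old fleet", old_fleet), ("new fleet", new_fleet)):
    cost = annual_power_cost(fleet["servers"], fleet["watts_each"], usd_per_kwh)
    print(f"{name}: {fleet['servers']} servers, ~${cost:,.0f}/year in power")
```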
Tom O’Malley
Helpful. I want to pivot to the client business. You guided revenue growth into Q4, but we have seen some weaker data coming in from the ODMs. So, I guess, a multi-part question. I know you were undershipping the market for much of the year, but how is visibility today, and are you still seeing the strength that you pointed to at earnings?
Jean Hu
Yeah, I think it's worth doing a quick recap of the PC market. It's a very cyclical market. It's one of the semiconductor segments that got into the cyclical downturn first, right? It literally happened a year ago, and this downturn actually was quite severe; it was one of the worst during the last, whatever, three decades. So, in the first half of the year, when you look at 2023, the whole industry was going through inventory digestion. We basically sold in much less versus sell-through to allow the downstream inventory to continue to be digested. That was the first half. And then the second half was a different situation, because inventory largely got consumed. So, you do see the industry start to normalize. When you look at Q3 and our Q4 guide, it's part of a normalization, I think a stabilization. Right now, if you look at IDC, the third-party forecast for next year is that they do believe the PC market unit volume will come up, but it's like low-single digits. So our view is, it's normalized, it's stabilized. Going forward, you will see the typical seasonality in Q1, and then next year is based on the industry; it's an economically sensitive sector, too. So, we don't know what the macroeconomic situation will be. But in the end, the PC market has stabilized. That's the good news. I will say, as we announced yesterday, it's exciting in the long run with the AI PC, because, eventually, a lot of AI work needs to be done at the device level. You cannot do everything in the cloud. I think, probably not 2024, but definitely in 2025, we'll see tremendous momentum from the AI PC. That will help the replacement cycle.
Tom O’Malley
Helpful. And I just want to hit on pricing in that market as well. Are you seeing any more aggressiveness from your competitor? I think you guys have done a good job of kind of getting ahead of the trend in that business. Share has come back a bit since the pandemic, but it's kind of stabilized here. What's going on from a pricing perspective?
Jean Hu
Yeah, I think we gained share in Q3. Quite a lot of share, actually, in Q3. I think on the pricing side, it is a competitive market. Our view, our strategy, has always been focusing on the premium side of the market, the commercial market, more for the mid-range to high end, because we need to drive profitable growth, right? If you just grow topline revenue without profitability, then it does not make economic sense. So, I think what you will see from us is to continue to really drive profitable growth and focus on the segments where we believe we have a competitive advantage.
Tom O’Malley
Helpful. I want to shift again to the embedded business. It was down materially in September, and you guided down double digits again into December. The supply chain picture is mixed. I think you're hearing maybe some recovery next year just off of these low levels. But I just wanted to get your take on that market, and clearly, there are a variety of end markets within there. If you could spend some time on just what's causing some strength, what's causing some weakness?
Jean Hu
Yeah, the embedded business, Xilinx, the acquisition we did — Suresh actually is here; he should talk about it. But I think the performance after the acquisition was tremendous. It literally grew like 17%, double digits, last year. I think what happened is, if you look at this time, the semiconductor inventory correction is actually staggered. If you look at the embedded inventory correction, it's literally one year behind the PC correction. It's great that AMD has a very diversified portfolio: we have PC, gaming, and then we have embedded and data center. So you actually can see the different corrections coming at different times. For embedded, literally, in Q3 and Q4, we just got into this very steep correction. We said last earnings call it's going to decline double digits sequentially, and we guided that we think Q1 is going to be very similar, because it typically takes several quarters. I would say, by end market, you definitely see industrial and some of the auto customers have some inventory they are digesting. When lead times drop from 52 weeks to 14 weeks, what will the customers' behavior immediately be? Hey, let me take this opportunity to, like, digest my inventory that I used to think I needed a year of, because you had 52-week lead times. So, I think it will take several quarters, largely. We do have segments like aerospace, defense, and also emulation that are continuing to perform. So it's a different picture within the end markets. But overall, our belief is, in the second half, we should see the recovery. And also, when we look at our portfolio, we do have a really strong set of products and have continued to gain design wins and market share, so typically when the market recovers we will get a tailwind to help the whole business.
Tom O’Malley
Thinking about all the markets that you just talked about: you have a very robust ramp in the data center business, you have a robust ramp in AI, PC seems stabilized, and the embedded business, which has traditionally been decent margin, is down but hopefully improving in the second half. Why shouldn't all these factors lead to a better gross margin ramp as well?
Jean Hu
Yeah, you nailed it. It should, and we definitely will drive that gross margin expansion. It's important for us to continue to expand our gross margin. Fundamentally, it's a reflection of the IP, right, the R&D work our team has been doing. So, the tailwind of the embedded business, right now, is really a headwind. If you look at the embedded business, it was like 24% of our revenue, and now it's like 16%, 17%. That's a huge headwind on the gross margin side, and we are managing through it, because we do have the PC client side gross margin coming back. But in the second half of next year, we do think we'll get more of a tailwind from embedded and continue to expand the gross margin.
Tom O’Malley
Last one, quickly, as we run out of time. You have this big AI ramp in the future, and it sounds like you're quite positive about it. You still have a large amount of cash on your balance sheet. Can you talk about priorities in terms of where you're going to spend that money? If this ramp is really coming, should you be buying back your stock right now?
Jean Hu
Yeah. Given the significantly larger opportunity and significant ramp of revenue, investing is absolutely our number one priority. We’ll continue to invest organically and through acquisition. But our business model generates a lot of cash. So, we can continue to do buyback. It’s very important for us to return cash to shareholders, too.
Tom O’Malley
Perfect. Thank you so much for being here. Really appreciate it. Thank you, all.
Jean Hu
Yeah. Thank you.