Curated by THEOUTPOST
On September 10, 2024
2 Sources
[1]
Advanced Micro Devices, Inc. (AMD) Goldman Sachs Communacopia and Technology Conference - (Transcript)
Advanced Micro Devices, Inc. (NASDAQ:AMD) Goldman Sachs Communacopia and Technology Conference Call September 9, 2024 3:25 PM ET

Toshiya Hari

Okay. We'd like to get started. Good afternoon, everyone. My name is Toshiya Hari. I cover the semiconductor space for Goldman Sachs. I am very honored, very happy, very excited to have Dr. Lisa Su from AMD, Chair and CEO. I'm pretty sure everyone knows Lisa, so we will go straight into questions and skip the intro. I think this time last year we were on this stage, and we kicked off the conversation by me asking you what your key priorities were, and you said something along the lines of AI number one, AI number two, AI number three. I think you've executed really well since last year. You've grown your data center GPU business from essentially zero last year to, per your guidance, $4.5 billion this year. Reflecting back, in what ways have you and your team outperformed your expectations, again specifically in the field of AI? And going forward, what are some of your focal points?

Lisa Su

Yes, absolutely. Well, again, thank you for having me. It's been a remarkable year. I would say so much has happened. I think for all of us in technology, we're moving faster than ever. And in the last year, if you look at what we've been able to do, we launched MI300X in December. It's had just tremendous customer traction, and customers have been really excited about it. We have several large hyperscalers, including Microsoft, Meta and Oracle, that have adopted MI300, as well as all of our OEM and ODM partners. When I think about, though, what I believe we've done best over the last, let's call it, nine months or so, it's really been the progress on software. That was always a big question: how hard is it to get people into the AMD ecosystem? And we've just made tremendous progress with our overall ROCm software stack.
We've now worked with some of the most challenging and largest models, and we've seen them get performance, in some cases with certain workloads, even better than the competition, which is exciting. And then we're continuing to build out the entire infrastructure of what we need. So we just recently announced several software acquisitions, including the acquisition of Silo AI, which is a leading AI software company. And we just recently announced the acquisition of ZT Systems, which builds out sort of the rack-scale infrastructure necessary. So sitting here and talking about priorities going forward, certainly AI is a huge priority for us. But when I think about AI, it's actually end-to-end AI. Of course, the data center component is very important. But I'm a big believer that there's no one size fits all in terms of computing. And so our goal is to be the high-performance computing leader across GPUs and CPUs and FPGAs, and also custom silicon as you put all of that together. So I think lots of opportunity, lots of focus on the road map going forward, but it's been a pretty exciting year.

Toshiya Hari

That's great. You shared a 2027 AI accelerator TAM forecast of $400 billion earlier this year. A lot has happened since then. How have your long-term expectations evolved since that time? To the extent you are more bullish on the opportunity set, which applications, which end markets have seen the most upside, if you will?

Lisa Su

Yes. When we originally talked about a $400 billion TAM in the 2027 time frame, I believe many thought that was high. And actually, as time has passed over this last year, I think we feel very good about that overall TAM. And I think the main reason for that is we are still so early in this AI computing cycle.
And whether you're talking about training of large language models, or inferencing, or fine-tuning, all of these workloads will demand more compute. And for that reason, we feel very good about the overall market. Now within that market, when we talk about the accelerator TAM, it's not only GPUs. We believe GPUs will be the largest piece of that $400 billion TAM, but there will also be some custom silicon associated with that. And when we look at our opportunity there, it really is an end-to-end play across all of the different compute elements. So from that standpoint, we feel good about it. Many people have said inference will continue to increase over time, and we're certainly seeing that. Training is very, very important, but inference is increasing over time. And then you actually see some mixture of the workloads, where people are doing inference and continuous training as they think about how to really tailor these models. Those are all important trends that are leading to the belief that the TAM growth will be there.

Toshiya Hari

Got it. I have one hardware question and then a software question. On the hardware side, you announced at COMPUTEX, I believe, that you'll be transitioning to a one-year product cadence in data center GPUs. I'm curious what catalyzed this change? Was it based on customer feedback? Are they asking for a higher cadence, if you will, or was it a competitive response?

Lisa Su

Yes. Definitely, when we look at the road map today for AI, and we have announced a one-year cadence, we've accelerated our investments in hardware and software as well as systems. It is all customer-driven. We spend a lot of time with our largest hyperscalers and our overall partners. And what we see in the ecosystem is that people have different data center needs.
Of course, you have the largest hyperscalers who are building out these huge training clusters, but you also have a lot of need for inference. Some workloads are more memory-intensive and would really focus there; some customers are more power- or data center infrastructure-constrained, and so they want to reuse some of their data center infrastructure. And so what we've been able to do with our MI325, which is planned to launch here in the fourth quarter, and then the MI350 series and the MI400 series, is really broaden the different products such that we are able to capture a majority of the TAM with our product road map. So lots of conversations with customers on what they need and where they're going, and ensuring that we're aligning our road map and our investments with that going forward.

Toshiya Hari

Software used to be one of the sticking points for AMD, and when I would have conversations with investors, that was sort of the commonly asked question. You touched on this a little bit at the very top of the session, but where do you see yourselves today from a software perspective, given the recent iterations of ROCm? You've also made M&A moves, if you will, from a software perspective. Where are you today, and what's still to do going forward?

Lisa Su

Yes, absolutely. Look, software has been a huge priority for us. ROCm has been around for a while. ROCm is our version of the ecosystem, and we use sort of an open-source ecosystem. But what has been necessary is for us to really practice ROCm in the most difficult environments. So over the last 9 or 10 months, we've spent a tremendous amount of time on leading workloads. And what we found is that with each iteration of ROCm, we're getting better and better: in terms of the tools, in terms of all the libraries, in terms of knowing where the bottlenecks are in performance.
So if I just give you an example: with customers that we worked with, let's call it, early on, we've been able to demonstrate on some of the most challenging workloads that we've consistently improved performance. In some cases, we've reached parity. In many cases, we've actually exceeded our competition, especially on some of the inference workloads, because with our architecture we have more memory bandwidth and memory capacity. And that's really good for large models, when you can fit them on a single GPU versus having to go to multiple GPUs. But the key with the software is how long it takes to get to performance, because time is money in this world. Whereas with earlier versions of ROCm it might have taken a couple of months for workloads to get performant, we're seeing much faster results in the latest iterations. There was one company we were recently working with that was very much using PyTorch as their framework foundation, and in this case it was performant out of the box on PyTorch and, within a week, exceeding our competition. So it just shows you that there's been a ton of heavy lifting on ensuring that the entire software ecosystem is there, and we're not done. That's part of the reason we announced the acquisition of Silo AI, which is a very, very talented team that is really there to help our customers migrate to the AMD ecosystem as fast as possible.

Toshiya Hari

Okay. Great. You mentioned time is money. You also announced the acquisition of ZT Systems recently. I know the deal hasn't closed. But what specific capabilities and competitive advantages do you attain once ZT is integrated into AMD, vis-a-vis going at it as you are today?

Lisa Su

Yes.
So maybe if I take a step back and talk about what we think the success factors are in the AI world. With our size and scale, we believe that we can be one of the most strategic computing partners to the largest hyperscalers as well as the largest enterprises. And as we spent time with our customers and really looked at what would be necessary sort of three to five years down the road, it was clear that the hardware road map is super important, and we've made significant investments there. The software road map we just talked about with ROCm; we've made significant investments there. But the rack-scale infrastructure, because these AI systems are getting so complicated, really needs to be thought of and designed at the same time, in parallel with the silicon infrastructure. So we're very excited about the acquisition of ZT. As you said, it hasn't closed yet; we expect it to close in the first half of 2025. What we see is a couple of major factors in terms of really addressing the future, and these are the largest-scale AI systems. The first is just designing the silicon and the systems in parallel. The knowledge of what we are trying to do at the system level will help us design a stronger and more capable road map. So that's certainly a big advantage. The second reason we're quite excited about it is, back to this comment of time is money, the amount of time it takes to really stand up these clusters is pretty significant. And we found, in the case of MI300, we finished our, let's call it, our validation, but our customers needed to do their own validation cycle. And much of that was done in series, whereas now, with ZT as part of AMD, we'll be able to do much of that in parallel. And that time to market will allow us to go from, let's call it, design complete to large-scale systems running production workloads in a shorter amount of time, which will be very beneficial to our customers.
And the largest thing is, look, we believe collaboration is key. And so this is an area where there is no one size fits all as it relates to the system environment either. Different hyperscalers want to optimize different things in their systems environment, and we want to have the skill set to do that, and to do that with, what I would call, best-in-class talent with the ZT team.

Toshiya Hari

And again, was this an example of a customer or customers sort of coming to you and saying, hey, why not make this move to speed up your process? Or how did it come about, if you will?

Lisa Su

Yes. I would say it's actually the opposite. If you think about it, and I've said this before, Toshiya, everything that we do is really making bets for what we think will be important three to five years from now. And so the work that we're doing today on sort of the MI300, 325, 350 series actually came from decisions made a few years ago, like our decision to focus on chiplet architectures. This is also a bet on what we think the future is going to be like. And we spend a lot of time with our largest customers. And when I look at what our priority is, look, we can build great technology, which I think we are doing. But in really making it easier for customers to adopt, with time to market, ease of adoption and adding more value into the equation, it became clear that we wanted more systems capability. And again, ZT is one of the leaders in AI systems, and similarly, their customers are very much our customers, so it made it a very logical choice.

Toshiya Hari

Got it. I have a ton more AI questions, but I want to shift gears a little bit. The server CPU market, which continues to be a very important market for AMD, went through an extended correction. The market finally seems to have turned the corner from a demand perspective. What are your forward expectations for server CPUs?
And how would you differentiate what you're seeing in the cloud hyperscale space versus enterprise? I think some of your customers are increasingly worried about things like space and power consumption. Could innovations like Genoa and Turin catalyze a replacement cycle in server CPUs?

Lisa Su

Yes, absolutely. I am pretty happy with some of the server CPU market trends. I think what we've seen is that traditional compute is important. So as important as accelerated compute is, there are lots of workloads that run on traditional CPUs. And from an upgrade cycle standpoint, although there was a little bit of a delay in the upgrade cycle, we are seeing customers upgrade today, and that is both cloud and enterprise. From the cloud standpoint, it's very, very beneficial to upgrade some of the infrastructure that is four or five years old. You get significant power savings, significant space savings and overall TCO benefits. Genoa, our Zen 4 family, is extremely well positioned, and so we've seen very strong adoption with the new capabilities there. We're very excited about our Zen 5 cycle. Our Turin cycle is coming up shortly. We'll be launching that here in the fourth quarter, and we see lots of excitement around that as well. And then going forward, as we think about the decisions that people make, whether you're talking about a cloud or enterprise environment, I think people are just becoming much, much smarter about what a difference the underlying silicon makes. So whether you're making a choice of something that's cloud-optimized or, let's call it, performance-optimized, we actually expanded our CPU portfolio because we believe that different variants would get you better TCO. And we're seeing that play out with our customers.

Toshiya Hari

Got it. In terms of the competitive landscape in server CPUs: five, six, seven years ago, you were at low single-digit market share, I believe.
And today, from a revenue standpoint, I think you're in the low 30s. I do think you've had massive success on the hyperscale side; you're at or above 50%, I believe. On the enterprise side, it's been a little bit slower. But at the same time, you've been much more vocal in terms of the penetration or the momentum you have. So what are your thoughts on the enterprise side? And what needs to happen for you to inflect higher and for your market share position to mirror what you have in hyperscale?

Lisa Su

Yes. I mean, it's been really exciting to see how the data center market has grown for us as a business. When you think about where we started, the data center business, as you said, was low single-digit share. It was a similar percentage of our revenue. In our last quarter, I think data center was over 50% of our revenue. So we really are a data center-first company. And when you look underneath that, customers really adopt when they need the best technology. For the hyperscalers, I think their adoption rate was faster and earlier, especially on first-party workloads, because the TCO advantage of adopting AMD was so clear. As you look at enterprise and some of the, let's call it, third-party adoption, they've had many other things on their minds, and so they weren't necessarily focused on CPU versus CPU. But at this point, it's all about TCO, and it's all about efficiency. And one of the things we've found, the more we have interacted directly with end enterprise clients, is that they want the best technology. And so we've put more field application engineers in place. We've done quite a bit more of these larger, complex POCs for customers to try in their environments. We're helping customers with, again, software support. There's not a lot of software support that's needed on the CPU side, but there's some for people to get comfortable. And we've seen the adoption increase on the enterprise side.
So if you talk about our market share being in, let's call it, the low 30s as a revenue percentage, on the hyperscaler side, we're well above that. And on the enterprise side, we're well below that. And I think we have a lot of opportunity to continue to grow in enterprise.

Toshiya Hari

And there's really no fundamental reason why your enterprise share should be so much lower than hyperscale from a technology perspective?

Lisa Su

Yes. From a technology standpoint, I think we feel extremely good about our competitive positioning, and it is really about being a trusted supplier. One of the things that we find in the data center is customers want to know that they can count on you, count on your road map, count on your reliability, all of those things. And I think we've demonstrated that over the last few years.

Toshiya Hari

Many of your cloud customers have custom CPU and accelerator programs running; some are way ahead, some are fairly nascent. How do you see the mix of merchant versus custom evolving over the long run, again, both on the CPU side and the accelerator side? And as a supplier of, for the most part, merchant silicon, how do you plan or strategize competing with essentially some of your customers?

Lisa Su

Yes. I find this to be an interesting question because people are always wondering, well, is it going to be X or Y? And I say, look, it's going to be both. When I think about the investments that we're making in a competitive CPU and GPU road map, they're huge. And we're getting economies of scale over all of that investment in architecture, in software, in yields and reliability and all of those things. And our largest hyperscaler customers want to leverage that scale. That's a good thing. And so we expect that our job is to continue to move, let's call it, the merchant road map as fast as possible to get all those efficiencies of TCO and new technology and new architectures going forward.
As expected, there should be custom silicon. I think custom silicon will come into play. It will typically come into play for, let's call it, less performance-sensitive applications. So that's where you see, sometimes, good-enough performance can be done in custom silicon, or in areas, especially on the accelerator side, where it's a more narrow application. So if you don't need a lot of programmability, if you're not upgrading your models every 12 months, in that case, you may trend towards that. But that being the case, when we think about, for example, our $400 billion accelerator TAM, we think the vast majority of that will remain GPUs. And then I also look at it as an opportunity to partner more closely with our largest customers. I don't view it as competition. I really view it as partnership, because we also have a semi-custom capability. If you look at what we've done, for example, in our game console business with Sony and Microsoft, what we say is, hey, come use our IP and figure out how you want to differentiate yourselves. And I believe that's a very effective model when you get into a time frame when the models and the software are a bit more mature, in which case that would be an opportunity for us.

Toshiya Hari

Okay. So something like that we might be able to see on the data center side?

Lisa Su

I do believe so, yes. So, look, I think at the end of the day, we're all about how we drive more value in our overall technology equation. And again, we have very deep partnerships built on all of our IP investments. There are definitely ways that we can do even more together with our largest customers.

Toshiya Hari

Got it. On AI PCs, from a financial markets perspective, CES was very much sort of an AI PC fest, and then COMPUTEX was another one. More recently, I think sentiment on our side, if you will, has come down a little bit. What are your thoughts on AI PCs?
What are you focused on as it pertains to killer apps? And how would you characterize your competitive position in AI PCs vis-a-vis traditional PCs?

Lisa Su

Yes. I believe that we are at the start of a multiyear AI PC cycle. So again, you guys are always trying to go a little bit too fast. We never said AI PCs were a big 2024 phenomenon; AI PCs start in 2024. But more importantly, it's the most significant innovation that's come to the PC market in definitely the last 10-plus years. And I view it as a very, very natural thing. If you're thinking about PCs as a productivity tool, you can definitely use AI. And in this case, AI PCs have these NPUs in the silicon, so you can definitely use this AI technology to make your PCs more useful. So why wouldn't people want to adopt AI PCs? It is one of those things where you have to do a lot of hardware-software co-optimization. We've done a tremendous amount of work with Microsoft on their Copilot+ initiative. They just announced last week at IFA that they will have, let's call it, x86 support for our and other technologies later this year. We think this is the beginning of the AI PC cycle. So next year, as we think about commercial PCs and the commercial refresh cycle, we actually see the AI PC as a driver of that commercial refresh cycle.

Toshiya Hari

Okay. And then from a competitive standpoint, I think, historically, you've been better positioned on the consumer side and maybe a little bit less on the commercial side. Going forward with AI PCs, could that be a catalyst for you to improve your position on the commercial side?

Lisa Su

Yes. Again, on the PC side, we have traditionally been underrepresented overall, but particularly on the commercial PC side. As we have really focused on our future go-to-market, our investments in the enterprise and commercial go-to-market have increased quite a bit. I think we lead with server CPUs.
With server CPUs, the value proposition is very, very strong for AMD. And then we find that many of these enterprise customers are pulling us into their AI conversations, because, frankly, enterprise customers want help, right? They want to know, hey, how should I think about this investment? Should I be thinking about cloud, or should I be thinking about on-prem? How do I think about AI PCs? And so we find ourselves now in a place of being more like a trusted adviser with some of these enterprise accounts. And so I do believe that when you look at the overall choices that enterprise CIOs have to make, from their traditional compute (cloud versus on-prem), to their AI compute (how much is done on CPUs versus GPUs, and how much you have to worry about privacy and security and all of that), to AI PCs (when to adopt), all of those are part of a broader commercial go-to-market that I believe is a great opportunity for us. And frankly, it's an important opportunity for the industry, because CIOs have more choices today than they've ever had. What they need is some help to go through all of that and figure out where the priorities for investment are.

Toshiya Hari

Shifting gears a little bit, your Embedded business, primarily the FPGA business, is about 40% to 45% off the recent peak. You did, I believe, guide that business up going forward. What are you seeing from a customer order pattern perspective? You serve industrial, automotive, consumer, et cetera. Are there any applications or end markets that stand out from a demand standpoint?

Lisa Su

Yes. So again, the Embedded business is a business we don't talk about quite as often as it relates to AMD, but it's a very, very good business for us. When we look at the diversity of customers and the diversity of applications, we continue to believe it's a strong pillar of our overall strategy. We are coming off the bottom.
So the first quarter was the bottom for the Embedded business, after there was just a lot of inventory that had gathered at end customers. We do see some improving order patterns, certainly in the second quarter and going into the second half of the year. It's probably a little bit more gradual than everyone would like. We do see some markets doing better than others: aerospace and defense is very strong, test and measurement and emulation-related needs are strong, and industrial is a little bit slower in the overall recovery. But what I'm most excited about with the Embedded business is we're starting to see some real synergies in our overall portfolio. If you think about it, our embedded customer set based on FPGAs is over 6,000-plus customers, and many of them had not really even understood the technology that AMD had. And what we're finding now, especially in this world where, as I said, CIOs and CTOs are facing a really complex environment, is that they actually don't want more and more suppliers. They want more partners that can help them navigate the overall road map. And so we've seen very significant design win synergy between our embedded FPGA business and our embedded CPU business, with design wins in the first half of the year up sort of 40% year-on-year to over $7 billion in new design wins. And we see multiple customers saying, you know what, I want to standardize on AMD. I trust you guys. I trust that you'll be a good partner in all respects. Now let's talk about how we move more and more of our portfolio.

Toshiya Hari

Got it. Coming back to AI, just on how you think about the portfolio and potentially M&A going forward. You've had Xilinx, Pensando, multiple software assets and now the not-yet-closed ZT Systems. At this point, do you believe you have the portfolio and the right assets to be very competitive, or are there still holes that you feel you need to fill?

Lisa Su

Yes.
So we've always thought about our portfolio and our capital allocation very strategically. These are long-term bets. Each of these acquisitions, and our organic investments, have been towards really positioning us to be a leader in high-performance computing and AI. So I think with Xilinx, Pensando, our software acquisitions and now with ZT Systems, we're extremely well positioned. And I'd like to say well positioned in the bigger AI conversation: not just data center AI, but really end-to-end AI infrastructure across cloud, edge and client. And I feel really good about our portfolio. So, yes, we're in good shape.

Toshiya Hari

Okay. Great. The other question that we often get is on the supply chain and what's going on there. Nothing specific to AMD, but generally speaking, things like advanced packaging and high-bandwidth memory have been fairly tight from a supply perspective in '24 and going into '25. How supply-constrained are you in your Data Center business? And I know this is a tough question, but at what point do you feel like supply can potentially catch up with demand? I know it's a moving target.

Lisa Su

It is, Toshiya; as you said, it's a moving target. Look, I think as an industry, we've put a lot more supply capacity on board. So we've certainly ramped up our ability to service AI revenue in 2024, and we will take another big step up in 2025. The constraints are, like you talked about, advanced packaging and some of the high-bandwidth memory. I think it continues to be tight, frankly, because although we're bringing overall capacity up in the industry, demand is also very strong. And then we find that with the new generations, die sizes are larger and the memory capacities are larger. So all that says we're still going to be in a relatively tight supply environment going into 2025.

Toshiya Hari

Got it.
On supply and how you think about your manufacturing strategy, the other question we often get is, how do you think about your foundry strategy going forward? You have a lot of concentration at TSMC, and specifically in Taiwan, and this certainly isn't specific to AMD. But how do you think about a plan B, if you will, if there is one, when you're thinking out three, four, five years down the line?

Lisa Su

Yes. It's clear that we all have to think about resiliency in our supply chain. COVID certainly taught us that. We continue to look at diversification of the supply chain. TSMC is a fantastic partner. They have been an excellent partner to us across all of the various aspects of technology and manufacturing. We are big supporters of the CHIPS Act. We're happy that people are building in the US. We're happy that TSMC is building in Arizona. We're taping out products and ramping there. And we'll continue to look at how to derisk the supply chain, with the understanding that this is an industry-wide problem and all of us are looking at how to create more geographic diversity.

Toshiya Hari

Okay. Great. In the last two minutes, just one last question, on how we should be thinking about OpEx leverage: your investments in the near term versus generating profits and free cash flow, if you will, for investors. Obviously, you have a rich set of opportunities, as we've discussed. You do have a lot of competition from very strong companies; you're a strong company as well. How do you think about that balance, investments versus showing returns, if you will, for the investor base?

Lisa Su

Yes. Look, capital allocation is incredibly important for us, and we have many more opportunities; every year, we seem to get more. I think the key principle is that we are investing in the business. This is an opportunity for us. I think this AI technology arc is really a once-in-50-years type of thing. So we have to invest.
That being the case, we will be very disciplined in that investment. And so we expect to grow OpEx slower than we grow revenue. But we do see a huge opportunity in front of us.

Toshiya Hari

In the last minute or so, then, since we have a little bit of time: is there anything that perhaps we didn't touch on in the session? Or, from your discussions with investors and analysts as a collective unit, are there any aspects of AMD or your markets that we either overlook or underappreciate?

Lisa Su

Yes. I think the main thing is, look, this is a computing supercycle, and we should all recognize that. And there is no one player or one architecture that's going to take over. I think this is a case where having the right compute for the right workload and the right application is super important. And that's what we have been working on building over the last five-plus years: having the best CPU, GPU, FPGA and semi-custom capability, such that we can be the best computing partner to the ecosystem.

Toshiya Hari

Great. Thank you so much for the time, and I hope to have you back next year.
[2]
Super Micro Computer, Inc. (SMCI) Goldman Sachs Communacopia + Technology Conference (Transcript)
Thank you, everybody. Welcome to the Super Micro fireside chat at the Goldman Sachs Communacopia and Technology Conference. I have the privilege of introducing David Weigand, CFO of Super Micro Computer. Prior to joining Super Micro in 2018, David worked at Hewlett Packard Enterprise, Silicon Graphics International and Renesas Electronics America. My name is Mike Ng. I cover hardware and com tech here at the firm. Just in terms of housekeeping rules, we have about 35 minutes for today's presentation. First, thank you so much for being here, David. It's really a privilege to have you here. To start things out, maybe we can start with the bigger picture question. Supermicro clearly has had a tremendous amount of success with the class of customers, which many refer to as Tier 2 Cloud or AI CSPs. Could you talk a little bit about what has made Supermicro a partner of choice for these customers? How does Supermicro drive differentiation relative to some of the OEM server competitors? David Weigand Okay. Thanks for having us and hosting us. So, before I begin, I wanted to say just a few points. Investors should refer to our cautionary safe harbor statement regarding risk factors and forward-looking comments on our Supermicro IR website. Also recall that we recently disclosed, we needed additional time to file our 10-K. We also said that based on what we know, we don't expect any material changes to our fourth quarter or full year fiscal year '24 results. Appreciate everyone's understanding that there's nothing more I can say on that topic at this time. We remain focused on delivering the very best products to our customers and executing on our business plans. So let me go ahead and answer your question. And the answer is that Supermicro's success has really been founded on a couple of things. One is that we have -- we build very efficient systems. We build very reliable systems. And we also are very fast to market. We are really quick to market with new technologies.
And liquid cooling is really the latest example of that. We're shipping liquid cooled racks at scale. And this is at a time when the market is moving toward higher heat with GB200 with MI325x processors and GPUs that are consuming more power, generating more heat. And we're early to market. So, it's really the combination of that along with our -- what we call building block solutions, which is really our ecosystem. Those are the things that have been attractive points for those customers to come to us. And you've talked about speed to market as one of the many competitive advantages for Supermicro, could you just discuss why Supermicro has had a history of being first to market with these new technologies? What is it about the engineering or the Company's history that supported these competitive advantages? And do you worry that some of these advantages may be dampened over time just as the competition intensifies, right? Dell is somebody who comes top of mind when I think about that. David Weigand Sure. So first of all, we're vertically integrated, which means that we do -- in San Jose, we do design, we do manufacturing, we do testing, we do rack assembly, all under one roof. And that allows us to be really, really fast to market. We are a very engineering-focused company and we have a very unique way of our approach to building server technologies. And that unique approach is that we look at the server as an ecosystem kind of similar to maybe like an iPhone. We want everything in our -- in that ecosystem to work well with other components. So that way, when something new is introduced, it already fits into all of the other components. And so, we've always grown organically. That's helped us. But now to the second part of your question, Michael, how does that affect us going forward? And the answer is that there's a lot of new technologies that are ramping now. So, you've got -- you have GB200 coming out. You have Gaudi 3, you have MI325X. 
And so, we do best in a period when there is technology disruption. And that's because, number one, we're very fast to enable those new technologies to come out with a complete platform of AMD, of NVIDIA, of Intel solutions. Number two, they're going to be high-quality solutions, and they're going to be customized. I mean, we have customers that we prepare 20 different types of servers for one customer. And so that's our specialty. Michael Ng And maybe you could expand on that a little bit. Supermicro clearly has a long history of partnership with all the silicon -- semiconductor companies, all the compute suppliers. How do you think Supermicro is positioned to perform in the next wave of innovation, whether that be Blackwell or Rubin or other chips from AMD and Intel? And what does that mean for the financial profile, if anything? David Weigand Yes. We're always working on the next technology. So, we really value our relationships with AMD, Intel, NVIDIA, Broadcom and others. And so, we always want to be one step ahead, and that's -- we believe that we did that again with liquid cooling. It hurt us a little bit on the margin side as we ramped up on our shipments in the June quarter for our liquid cooled racks, but ultimately, it was the right investment because of where the industry is going. And so that's it. Michael Ng Yes. One other industry dynamic that I would like your thoughts on is just the shift to what feels like more reference design or closed system architectures. Is that a meaningful shift that you're seeing in this upcoming cycle versus prior ones? And does more of the design coming from your compute partners mean there's less value add and more potentially for somebody on the server side? David Weigand Yes. So the reference designs are kind of -- I think they're excellent. But you have to remember that not everyone wants a red Corvette, okay? And a lot of people want to customize their car a little bit differently.
And I mentioned a minute ago, the fact that we have one customer who we build at least 20 different servers for. And that's not because they're not sure what they want. It's because they know exactly what they want because they have end customers who have different workloads. And so, their solution is built on our hardware and their software. And so, with different workloads come different requirements and also different economies of purchase. So, in other words, you don't always need a Cadillac for the job that you need to do. So, what we're expert at is bringing about the best total cost of ownership metrics. And that means you're going to get the best cost per dollar per watt, and that includes taking reference designs and customizing them to a particular customer's needs. So that's what we're -- that's what we think will happen. It already happened in DGX and HGX. We think it will happen again with other -- with future technologies. Michael Ng Great. And on the last earnings call, Supermicro provided guidance of $26 billion to $30 billion of revenue for fiscal '25. I was wondering if you could talk about the AI server demand environment that you're assuming to support that guidance. What gives you visibility and confidence into providing that outlook for fiscal '25? David Weigand When we talk to our customers about what they're doing, when we see some of the financing that's being put in place to build data centers around the world, we believe that there's still a lot of runway left and that demand continues to be very strong. So that's what gives us confidence in the overall market. Michael Ng And one thing that we all have to consider as we think about the server revenue outlook is how you're thinking about the generation of components and the timing of those components. So, what are you assuming as it relates to the Blackwell chips in that outlook? 
What are the potential offsets to the extent that there's more Hopper demand if there is some sort of slowdown on the Blackwell side? David Weigand Yes. So, we -- I think we said on our last earnings update call that we expected the first half of '25, which is the second half of our fiscal year that there would be -- that we would really start to ramp in -- with the GB200, with Blackwell. And so, the -- but in the meantime, we have really highly efficient, highly reliable H100, H200 liquid cooled racks that we can ship right now. And that we are shipping right now. So therefore, we think that there's still a lot of runway left on those products. And we've got the -- we have MI325X coming out. And so, there's still a lot of products there that we can provide that customers want. Michael Ng And we started out this fireside chat, talking a little bit about Supermicro's investments in liquid cooled servers that may have impacted margins a little bit last quarter and for the guidance in the upcoming quarter. Could you talk a little bit about liquid cooling as a margin headwind and when and why does that get better? And while we're on the topic of gross margins, I think you also called out customer mix as an impact as well. So, if you could touch on that, that would be great. David Weigand Sure. So, we mentioned, again, on our conference call that we shipped about -- that we couldn't ship about $800 million to a customer whose data center wasn't ready. And so those things happen, that would have been a better margin profile. So that hurt us a little bit on the margin side. In Q4, we weren't expecting that. And then also, we encountered more costs than we expected in ramping up on large-scale shipment of liquid cooled racks. So, we had more expedite costs. We had higher component costs. We had manufacturing. We were still working out some of the manufacturing efficiencies, not of liquid cooling, but of shipping at large quantities.
And so that was really the challenge, but we've perfected that process now. And so we went right into Q2. And so the processes are much more efficient. Efficiency to me means a better cost profile. And so that's why we were able to give the guidance that we gave. Michael Ng That's great. And maybe just on that. So it sounds like some of the expedite costs and initial efficiencies of liquid cooling will fall off. You're probably getting a little bit of operating leverage, and it seems like the production process is better. Is that what gives you confidence around the 14% to 17% long-term gross margin outlook? Why is that the right range? And maybe you can talk about why the gross margin trajectory should get better throughout the year? David Weigand Sure. So Supermicro sits between the ODMs on the low side and the large server manufacturers on the high side. So, we sit in the middle as a producer of customized solutions. And so, we think that -- we still believe that we have the best value to our customers because of the -- again, because of that cost per watt or per dollar metric. And so that's why we set our target margins at 14% to 17%. Now when we -- in the early stages of H100 when we were one of the first to start shipping reliable, efficient H100 servers, we started ramping with great demand because we were one of the few companies that could do that. So, we were able to command higher margins than even we expected as we went over 18%. So now I think what you have is you have -- where there's a little bit of a pause here as we start -- as we await the new technologies to come out. And so that brings about a little bit more competition. But we think that in the long run, we still think 14% to 17% is a good target for us. And if you look back over history, we -- it's one that we -- that's achievable for us. Michael Ng Shifting gears and maybe talking about liquid cooled AI servers, Supermicro shipped, a bunch of liquid cool AI server racks last quarter. 
I don't think it's too controversial to say that it was the first to market to deliver something like that at scale. And Supermicro certainly talked about the strength in the market share there. Why is liquid cooling important even with Hopper-based AI servers? What portion of your sales are liquid cooled today? And how big do you think that will get over time? David Weigand Yes. So we didn't announce what portion of it is, but one thing we said was that between the March quarter and the June quarter, we grew $1.5 billion. And we said that, that growth came a lot from liquid cooled racks. And the reason we think it's important even for Hopper is that we've always promoted green computing. And so, we completely believe that it makes sense to utilize less energy, less power by using liquid cooling. And by -- and we actually used highly efficient air-cooled designs to get the best performance that we could out of air cooled. And that had to do with the way we designed all of our products from the chassis up. So liquid cooling is a natural extension of that. It's trying to lower your -- lower the carbon footprint, green computing, it's been part of our DNA for a long time. We still promote that. But the market wasn't quite ready for that. And that was one of the reasons we even discounted it to try to get a foothold. And we weren't sure where all of the competition was, but we kind of found out that we were actually -- we actually had an edge, and that edge we believe, is going to help us because now where the market is headed is where you have to have liquid cooling. And so, guess who happens to be ready to ship liquid-cooled products, and others are going to have to do the same thing that we went through in terms of engineering, in terms of -- manufacturing is manufacturing and manufacturing know-how is very overlooked, but we're one of the remaining companies in the U.S. that is building things here in America, on U.S.
soil because we know how to do that. And a lot of our customers like the fact that we make our products here, that we design our products here. And so, we're proud of that fact. Michael Ng And just as a follow-up on that, maybe you can talk a little bit about Supermicro's liquid cooling design advantages relative to peers. Is it more in-house design and in-house sourcing, obviously, there are a lot of liquid cooling component companies out there that may sell to some of your competitors like what's different about a Supermicro liquid cooled AI server versus someone else's? David Weigand Sure. That's a fair question. So, there's not a lot of products we could compare that with on a scale basis. But one thing I can tell you is that we put the same effort or maybe even more into our liquid cooled design as we have all of our products. And that is we design them carefully and thoughtfully to make sure that they are compatible with all of the other products in the Supermicro ecosystem. That means that we can prepare liquid cooled racks for AMD products, for Intel products, Gaudi products for -- and NVIDIA H100, H200s. So, it's really the same level of care and design. And we believe that that's going to carry -- it's going to carry us in the future. Michael Ng In response to a lot of the demand that Supermicro has been seeing, the Company has been investing in production capacity expansion, including with the new facility in Malaysia. My understanding is that production in Malaysia is supposed to come online in November of this year. Could you talk a little bit about what that ramp in production looks like? And what's the best way to think about Supermicro's capacity, whether that be in revenue terms or rack terms, whatever you think would be most helpful for the audience here? David Weigand Sure. So, we had said that we have a rack capacity of 5,000 servers per month. 
And then, we went on to say that 1,000 a month of that 5,000 was liquid cooled capacity. So now we're actually at 1,500. In July, Charles said that we were at 1,500 liquid cooled rack capacity. That doesn't mean that we're shipping that amount. That just means that's what our capacity is. So to your question, we have a site in Johor, Malaysia, which is about 25 minutes away from Singapore. And we're very excited about that site. And we believe that it dovetails nicely with the demand in Asia for AI computing, and it also gives us a chance to lower the cost envelope in a lot of areas. And so, we're -- we want to produce liquid cooled racks in Malaysia, but that's -- we're not going to start there. But that's where we eventually want to go. And I think we've announced that by our fiscal year-end, June of '25, we'll increase our liquid cooled capacity up to 3,000 racks per month. Michael Ng And just as a point of clarification, when we think about the 5,000 rack capacity and the 1,500 on liquid cooled, is that including Malaysia? Or is that kind of the current today? Okay. Got it. Could you touch on the working capital requirements and whether or not Supermicro potentially needs to raise more funding to support customer demand? David Weigand Yes. So, we -- last year, we more than doubled our revenues. We went from $7.1 billion to almost $15 billion. And by the way, our profits went up 87%. So, we needed more capital clearly to take our inventory levels up to $4.4 billion at the June quarter and take our accounts receivable up to about $2.7 billion. So, it took -- it takes -- it absorbs working capital to carry those figures. So, we did have to do some equity raises. But now we believe that we're an IG profile company.
And as such, we -- our goal now is to try to utilize our balance sheet to get unsecured lines, unsecured funds, and that will help us much better than some of our historical methods of borrowing money, which were basically on asset-based lines. And so, we think that in the next phase, we want to move to using debt for working capital. Depending on growth, of course, sometimes we may have to do something different, but we're hoping to use debt to finance our growth. Michael Ng Let's talk a little bit about customer types. On the last earnings call, Supermicro had talked about being hyperscale ready. And I was wondering if you could just clarify where Supermicro's position is right now as it relates to hyperscale customers. I'll define those as the big four versus next-gen hyperscale companies, which are obviously this large and emerging group of CSPs. David Weigand Yes. The definition of hyperscale is definitely changing right now because it's really the companies that are providing massive amounts of data and storage to their customers with the ability to scale. And so, yes, as you mentioned, Michael, there's kind of a new -- there's an emerging group of companies that have stood up and that are building data centers first in the U.S. and now around the world, also over in Asia and in the Middle East. And in Europe, there are former Bitcoin miners and others that are standing up AI as a service, infrastructure as a service at large scale. And so -- so, yes, the definition of a hyperscaler has started to evolve a little bit because you have some companies now that are really handling some very big customers. A lot of data. And in some cases, they're taking some of the -- they're helping out with some of the capacity requirements of some of the other hyperscalers. So, they become like a -- they're kind of a quasi hyperscaler.
So, then you've also got some companies with the generative AI models, the large language models they are building, and we're shipping to some of those customers that are building large clusters in the world. And that is going to be in providing generative AI solutions. Companies are often now doing a sandbox, enterprises are trying to build out their potential use cases for AI using a CSP. But eventually, they may decide to build their own data center. And that's what -- one of the things that Supermicro is moving towards is what we call data center building block solutions. And that is we want to help customers to design their data center properly, utilizing liquid cooling, which means that they can use less CapEx to buy less -- smaller chillers. They can also save money on OpEx by having lower electricity requirements. And remember, if you need a 20-megawatt data center, if you are using liquid cooled racks in there, it's going to make a huge difference in your density and how many racks you can get into the data center and also in what your power requirements are. So, we want to be able to assist our customers with our data center building block solution products to help them to maximize the green computing element of data center usage. Michael Ng I think it's been well publicized and reported on that Supermicro is one of the key AI server vendors for the xAI Memphis Supercluster, a 100,000 GPU cluster. Was xAI the 20% customer last quarter? Anything you could share about demand trends from that customer? And how would xAI be characterized when you think about the customer segmentation that Supermicro lays out? David Weigand Yes. So, we try not to talk too much about our customers and let them speak to that. But I think in terms of how we characterize customers, we have three verticals. We have a 5G telco edge product vertical. And then, we have an enterprise vertical.
And then, we also have a large data center and OEM appliance vertical. So, the large data center OEM appliance, the OEM appliance is really like a Nutanix where they take our server, they put their software on top, and they provide a solution in their case, like a hyperconverged infrastructure or HCI solution. And we have other customers that use our servers as an OEM appliance to provide cybersecurity solutions, kind of a zero-trust cybersecurity product. And so that's a good business model for us because customers are buying from us quarter after quarter. Then we also have customers like Intel in the enterprise vertical. They're buying from us. We designed a very unique, highly efficient server for them back in like 2016, 2017. So, they're in the enterprise vertical. And then, let me jump back to the OEM and large data center. You've got customers that are just providing AI as a service. They're providing a large data center. And we have a lot of customers in that category as well. That vertical has done very well this year. Michael Ng We just have a couple of minutes left. I was wondering if you could end with a more big picture outlook, what are Supermicro's strategic priorities in the next three to five years? Do you expect Supermicro to look more like an ODM or an OEM or somewhere in between? David Weigand Yes. So, I mean, our goals in the next five years are really to help our customers maximize their data center experience, number one. Continue to provide customized, differentiated solutions, high performance, high reliability, optimized for the customer's particular workload and application. And we want to provide end-to-end total solutions. And so, we view ourselves in a unique category -- we're not a CM, and also, we're not yet in the large server category. But at $15 billion, we're not doing too bad.
But we really want to provide our customers with the very best product and the very best experience that they can have. And we have some very demanding customers, too, and we're fine with that. Michael Ng It's a wonderful place to cap off. It's been a privilege to be able to spend some time with you. Thank you so much, David.
AMD CEO Lisa Su and Super Micro Computer CFO David Weigand share insights on AI acceleration, data center growth, and product strategies at the Goldman Sachs Communacopia and Technology Conference.
Advanced Micro Devices (AMD) CEO Lisa Su recently spoke at the Goldman Sachs Communacopia and Technology Conference, highlighting the company's focus on AI acceleration and data center expansion. Su emphasized AMD's commitment to developing high-performance computing solutions for the rapidly growing AI market [1].
The CEO discussed AMD's strategy to capture a significant share of the $150 billion total addressable market (TAM) for AI acceleration by 2027. Su noted that while NVIDIA currently dominates the market, AMD aims to become a major player by leveraging its strengths in CPUs, GPUs, and adaptive computing [1].
AMD's MI300 accelerator family, launched in December, includes the MI300A variant that combines CPU and GPU capabilities in a single package. These products compete directly with NVIDIA's offerings in the data center and AI markets, and the MI300X has already been adopted by hyperscalers including Microsoft, Meta, and Oracle. Su expressed confidence in AMD's ability to gain market share, citing the company's track record of execution and innovation [1].
The company is also focusing on software development to support its hardware offerings, recognizing the importance of a robust software ecosystem in AI and data center applications [1].
In a separate session at the same conference, Super Micro Computer CFO David Weigand discussed the company's role in providing infrastructure solutions for AI and data centers. Weigand emphasized Super Micro's ability to deliver high-performance, energy-efficient systems tailored for AI workloads [2].
Super Micro has seen significant growth in its AI-related business, with Weigand noting $1.5 billion of sequential revenue growth between the March and June quarters, driven largely by liquid-cooled rack shipments. The company expects demand for AI infrastructure to continue to rise, guiding to $26 billion to $30 billion in revenue for fiscal 2025 [2].
Both AMD and Super Micro Computer highlighted the importance of collaboration within the tech industry to advance AI capabilities. Su mentioned AMD's partnerships with various cloud service providers and enterprise customers, while Weigand discussed Super Micro's relationships with chip manufacturers and software developers [1][2].
The executives shared optimistic outlooks for their respective companies, citing the growing demand for AI and high-performance computing solutions. Both emphasized the need for continued innovation and investment in R&D to stay competitive in the rapidly evolving tech landscape [1][2].
The presentations by AMD and Super Micro Computer at the Goldman Sachs conference underscore the increasing investor interest in AI-related technologies. As these companies position themselves to capture larger shares of the AI and data center markets, investors are closely watching their strategies and product roadmaps [1][2].
The competition between AMD and NVIDIA in the AI acceleration space is expected to intensify, potentially leading to more innovation and better products for end-users. Meanwhile, companies like Super Micro Computer are poised to benefit from the overall growth in AI infrastructure demand, regardless of which chip manufacturer gains the upper hand [1][2].