Curated by THEOUTPOST
On September 12, 2024
6 Sources
[1]
Earnings call: ServiceNow targets becoming top enterprise software firm by 2030 By Investing.com
ServiceNow (NYSE:NOW), during its recent earnings call, outlined an ambitious goal to become the most valuable enterprise software company by 2030, with CEO William McDermott highlighting the company's innovation and culture as key drivers. McDermott emphasized the success of the Xanadu release and the introduction of RaptorDB, along with ServiceNow's commitment to no layoffs during economic downturns. The company's focus on enhancing productivity through AI, without reducing headcount, was also discussed. ServiceNow's strong performance in customer service management, with revenues surpassing $1 billion, and its dominance in the financial services sector, serving all of the top 24 global banks, were notable achievements. The call underscored the potential of AI to impact the economy positively and ServiceNow's role in driving digital transformation across industries.

ServiceNow, under McDermott's leadership, is charting a course to dominate the enterprise software space by 2030. The company's recent innovations and strategic partnerships are setting the stage for significant growth and transformation in the industry. With a focus on AI and a commitment to maintaining a robust workforce, ServiceNow is positioning itself as a leader in the digital transformation era.

ServiceNow's ambitious growth targets are supported by its strong financial performance and market position. The company's gross profit margin of 79.07% for the last twelve months as of Q2 2024 underscores its operational efficiency and its ability to generate substantial earnings from revenue, a testament to the innovation and culture McDermott highlighted. The company's valuation metrics, however, indicate a premium market perception: with a P/E ratio of 157.33 and a price-to-book multiple of 20.85, ServiceNow is trading at a high earnings multiple and a high valuation relative to its book value.
This reflects investor confidence in the company's future growth prospects, aligning with the CEO's vision for ServiceNow to become the most valuable enterprise software company by 2030. InvestingPro Tips suggest that ServiceNow is a prominent player in the software industry and that its cash flows can sufficiently cover interest payments, indicating a healthy financial structure. Moreover, the company is operating with a moderate level of debt, which may provide flexibility for continued investment in AI and other innovations. InvestingPro also lists 15 additional tips for ServiceNow, providing a more comprehensive analysis for investors looking to delve deeper into the company's financial health and market potential. Investors may want to keep an eye on the company's next earnings date, October 23, 2024, to monitor ServiceNow's ongoing performance and strategic initiatives in the dynamic enterprise software market.

Kasthuri Rangan: All right. All I ask is, are we ready for GenAI to move to the application layer? At the bottom of the NASDAQ, I don't know, 2002, September, October, I forget. NASDAQ, thousands of stocks that did 8x multiples, not on revenue and earnings. The world feels like it's coming to a crashing end, and I'm depressed. Will I have a job? Will software be around? I meet this gentleman, McDermott. 30-minute meeting, I come out feeling great about the company that you just joined, feeling great about the industry, feeling great about my job. I come back energized. Wow. And it's been, what, 22 years?

Kasthuri Rangan: You've had a profound influence on the industry. Thank you for everything that you've done for software, for SaaS, and the things that you're going to be doing in the future. And thanks, you've been a big source of personal inspiration. Thank you so much.

William McDermott: Thank you. That's sweet. It's really sweet. Thank you so much.
Kasthuri Rangan: Now with that [indiscernible], people ask me, why is Bill always bullish? And why are you always bullish?

William McDermott: I'm the guy that wrote the book "Winners Dream." And in the opening quotes, by Robert Kennedy, he said, "Some men see things as they are and say, why. I dream things that never were and say, why not?" And that's who I am. And when I think about my life's journey, I just keep getting lucky by working for great brands that have unlimited potential. And if you lead people with innovation, customer-centricity and absolute unbreakable will and the passion to win every single day, it's amazing what can be accomplished when enough people care. And that's pretty much the story.

Kasthuri Rangan: And you asked me -- I will not mention the name of the stock, but 2007, June 29, you said, "What stock should I buy?" And I was blown away by the trust you had in me. I mentioned the name of a company that had a fruit thing in -- they were releasing the first smartphone or something. And you said, I bought it. I said, "God, he did it. He did it." Now I've got to take everything that I recommend seriously. So tell us, where do you want ServiceNow to be in 5 years? What are your long-term aspirations for the company?

William McDermott: I always focus, especially in technology, on the innovation of a company. Yesterday, as you know, we had an unbelievable Xanadu release: 350 net new innovations built into our gen AI release, representing 5 million hours of engineering excellence in that one release. So innovation is at the center of everything. If there's no innovation, there is no hope in tech. We also focus on the culture. I think that it's easy to talk about culture when things are going great. When things aren't going great is when cultures are built. And as you know, a few years ago, things were pretty questionable out there, and companies were laying people off like it was a sport.
And we took a very distinct position on that and said, "We will not lay off anybody at ServiceNow." And while there might be some uncertainty on the horizon, once it clears, we're going to need all these great people, because we only hired 10s and maybe 9s, but definitely not 8s and 7s. So we're going to need them all. And I think really making ServiceNow the best-run company in the information technology industry, where we can say with pride, we take the innovation, we take the culture and we take the way we lead the company, no matter what your position is, so seriously, because we're all working for the customer and their customers, and they're going to need us now more than ever. Every day, they need us now more than ever. And we just started to bring that sense of urgency, and we will what we want. And we want to be the best, and we will it every day, and that's what I want. I want ServiceNow to be the best-run business in the information technology industry. And from a shareholder perspective, I do have goals. And by 2030, for sure, I see ServiceNow, with the exception of the hyperscalers, as the most valuable enterprise software company in the world. That's what I believe is going to happen, and I don't have a doubt in my mind about it.

Kasthuri Rangan: You're welcome back in the intervening years of this conference, and we're going to see in 2029 when you're back. You guys [indiscernible], okay. I'll remember this.

Kasthuri Rangan: Thank you. Yes. Great. Bill, you talk to customers all the time. What is your feel for the pulse of what they're thinking about? What are they thinking about for calendar '25: budget expectations, priorities?

William McDermott: I think they always need to grow, and that somehow seems to get lost. Most people keep talking about expenses and productivity, but in the end, they have to both run a tight ship, keeping it lean and productive, and also grow.
And I always try to think about what we are doing as creating that single platform, that single pane of glass that resides above 55 years of complexity and a total mess. When you think about the enterprise, and the average person having to go in and out of 17 different applications, it's no wonder they want to work from home. Who wants to go into that enterprise? So with ServiceNow, they go into one platform. And all that complexity disappears, and their work becomes enjoyable, becomes a pleasure. Their creative juices can flow. They can get stuff done. So I really truly believe this is a moment in time where we have to help the companies we serve be more productive. Don't waste 1/3 of the workers' time doing silly season stuff. Get that productivity. Get that revenue-per-employee metric going north, and make sure that you're not only making the workers' stay more pleasurable, but you're getting much more output per worker, and you can prove it. Now once you free that up in gen AI, and the #1 use case for gen AI is process automation, then you also free yourself up to start thinking big and setting audacious goals on how you're going to grow your company, how you're going to unleash your company in new channels, in new ways, how you're going to apply the logic of process automation and data, and really give every employee and every customer an unbelievable experience on an end-to-end basis. So we came to the market with a clear vision to be the AI platform for business transformation. And that's not a department thing, that's an enterprise thing. That's in every industry, it serves every persona, and it crosses every geography in the global economy. That's what we're doing. And you can grow, and you can be efficient, and you can drive absolutely great results for your shareholders. That's what we owe every customer.

Kasthuri Rangan: That's great. When you look at what's been going on with generative AI, it's largely gone to infrastructure. Jensen Huang was earlier today up on stage here.
He did mention your company. I don't think he mentioned any other stock. He talked about employee workflows, customer workflows being built on ServiceNow. So that's infrastructure. A lot of the money has gone to infrastructure. When do you think GenAI moves to the application layer?

William McDermott: Well, first, I want to acknowledge a great leader, a visionary like Jensen. When I first came into ServiceNow, believe it or not, it's over 5 years now, the first thing we did, which wasn't even picked up by anybody because we've done this organically: we're the only enterprise software company in the world that crossed $10 billion in ACV, and we didn't have to buy a dime of revenue to do it. Didn't lay anybody off, and built our own pathway to the top by innovating great product after product after product. Now when I first came in, we picked up a company called Element AI in Montreal, Canada, because we had an immediate vision for AI. And the Turing Award-winning [indiscernible] had great researchers and great engineers, but no go-to-market. A lot of people forget the go-to-market. It's not a good idea to forget that. And by putting it together with ServiceNow, we were able to recode Element AI into the ServiceNow platform, and we've actually been building large language models with Jensen and his great NVIDIA company for that entire time. So our paint isn't wet. We're not a pretender. We actually have real product, and that's what Jensen reacted to and why he might have mentioned us on stage. He calls us the operating system for the enterprise. And he, along with me, believes that this platform that ServiceNow has can manifest itself in helping companies manage their processes at a record clip in terms of speed. We just announced RaptorDB. We're in the database business, by the way.
We now have a data workflow platform that we think is not only state-of-the-art, but we already know it's the fastest database in the world, and it can do analytic queries faster and better than any database in the world. And we know that as the workflow automates the process, we now have the ability to look at the data from any data source, whether it's a system of record or a hyperscaler or a lake; we can grab the data as part of the transaction in the workflow. And you can have a complete knowledge graph now of your enterprise, whether it's people, process, data or device, on one platform with one clear pane of glass. And you can go into an integration hub and see all the systems you have in the enterprise, all the devices you have in the enterprise. In fact, yesterday, we ran one little query, with 1.3 billion different devices in one of the largest companies in the world, at sub-second speed. And we studied that, made some assessment on that, and can help customers navigate that kind of a complex query instantaneously. And every persona now can simply have a graph and a dashboard, activated in real time on their device or on their desktop. No problem. We do it, and only ServiceNow does it. So this differentiation of innovation is what it's all about. Now as it relates to NVIDIA and Hugging Face and building things that have never been built before, we decided to go for domain-specific LLMs. And this is also multimodal, as you know. But the idea there was to build our own, to literally automate the way IT and the whole estate around IT is run, to completely reinvent the employee experience. Who cares about payroll? It's done. But how do I recruit, hire, onboard, train, certify, give you your learning journey so you're the best in the world at what you do, give you all the services so you can see whatever you need to see, your comp plan, what's happening with my long-term retirement? "Hey, how am I doing on the health care?
I have to take a leave, I have a maternity issue." Whatever it is, we do it for you. And you have a great experience, all on the mobile. And by the way, there's no calling the 800 number; you have it all. And then when you offboard, we handle that, too. A lot of people have employees that left their company years ago still in their data. That's a beauty. And then there's the customer and how you service the customer. And a lot of people don't know this, but when you think about customer service management, we blew past $1 billion. And when you think about field service, customer service, you think about the integration of ServiceNow Now Assist with Copilot from Microsoft. And just picture yourself, even in Teams, and you want to activate an action in ServiceNow. It's seamless. You don't have to jump in and out of some app or UI; it's already done at a deep engineering level. And so you say, "Well, how is that an advantage in customer service management?" Well, if I happen to be in ServiceNow customer service management and I want to compose a presentation, perhaps a PowerPoint slide, to make a sale to Goldman Sachs (NYSE:GS), I can do that instantaneously. I don't have to go into a separate app. It's all engineered together.

William McDermott: I would very much like that. Thank you, Goldman Sachs. You're actually an excellent customer. As are 24 out of the top 24 banks in the world. We were 23 out of 24 coming into this year. We handled that situation in the first quarter. We're now 24 out of 24. We love financial services. In fact, in our new Gen AI release yesterday, financial services, retail, telco, media, technology, public sector, all of these industries are now getting Gen AI use cases out of the box with the best platform in the world, and it's real product. And the other thing we should talk about is Creator. A lot of people don't know this, but Creator blew past $1 billion last year.
And so you say, "Wow, you mean you actually have developers now creating innovation and applications on the fly?" Yes. But the best part is, with complex workflows and governance procedures, the development of new applications is happening on that same platform. So now your CEO knows that anything you're building as an engineer is in concert with the business processes and the way the workflow should be automated. And it's not a sidecar project, like the thousands and thousands of dim points of light, the point solutions that are killing companies today, costing them all kinds of money but dragging their productivity into the ground. All that goes away with ServiceNow. So the word is getting out there. And as the word gets out there, the company just continues to prosper. And that's the most important thing, getting the word out there.

Kasthuri Rangan: Bill, what's your vision for Now Assist? In terms of net new ACV, there was a big increase in the second quarter. Where do you see this going?

William McDermott: Well, quarter-over-quarter, sequentially, we look at this doubling, and it has been doubling, and it's going to continue to double, because we have a real product. And I think the idea of Now Assist and the GenAI revolution, just think about it: $3 trillion will be spent on AI by enterprises through 2027. 36% of that will be GenAI, in the space that we're participating in. So we're fighting for a $1-plus trillion prize. So to your point, when do we move past hardware and GPU stacks and get into the app layer or the platform layer? It's now. It has started with ServiceNow.

Kasthuri Rangan: And I think ServiceNow, from what I can tell, you have great companies like Microsoft and NVIDIA that have prospered mightily because of...

William McDermott: Yes. And look, I think that we have built a framework partnership with Microsoft that's unbreakable. We have great partnerships with all the hyperscalers, though; it should be mentioned.
I do need to mention that, because we work with AWS. We also have a partnership now with Google (NASDAQ:GOOGL) Cloud. And it's important to give the customer choice. And so as you're talking about GenAI, just remember, our domain-specific models, they're not only real products, but they run very inexpensively. Customers love that. So just think about 175 GPUs versus 1 GPU in terms of the cost to run a big model versus a domain-specific LLM. Think about zero latency, rocket fast, it's your data, and think about total security. Because, again, you're not going on any wild adventures; we're working within the framework of the corporation. Customers love that. So that's where it's at now. But there's lots of different innovation. Meta is doing a great job. Microsoft is doing a great job. Google is doing a great job, and we integrate seamlessly with all those models. In fact, lots of innovation is going to take place where companies will build their own, and they can bring their own LLM and integrate that into ServiceNow. So this is infinite choice, no limitations, complete open architecture, and yet at the same time very disciplined workflow automation and security across the whole enterprise. Even on security, the security companies out there are very good. But some might be good in the cloud, some might be good at the firewall, others might be good at the endpoint. We integrate with all of them. And then you can have complete visibility on how they're all doing at different pieces and parts of the company in real time. So I think this idea of a control tower for digital transformation within every company is our calling card, and it's a good one, because I don't compete with anybody. I just do something that none of them do.

Kasthuri Rangan: I get pushback from investors. Hey, the gen AI thing, is it really a business use case? How are customers getting value out of it? I know you've deployed it internally.
Can you drill into one or two customers that have deployed GenAI with ServiceNow and gotten meaningful savings that are impacting their bottom line?

William McDermott: There are so many. Teleperformance would be an example. USI would be an example, and it's instantaneous ROI, by the way. So this isn't like a hard business case: let's get the consulting team over there for a month and prove it. This is like -- it's a no-brainer. You're deflecting 90% of the cases that normally had to be managed by agents. So okay, now it's self-service. Just think about that savings. When the case does need to be managed, just think about resolving it in seconds instead of 45 minutes, and then multiply that by the number of cases. Think about a BT (LON:BT). Think about a London Stock Exchange. Think about creating experiences for the customer that are so uncommonly good that they're undeniable. So I believe that the biggest cases right now would be in managing a case, deflecting a case, closing out a case, and making sure that, whether it's an employee experience or a customer experience, the time to resolution is so undeniably good that you don't need a computer to understand the business case. You could do it on the back of a napkin. And that's the kind of cases we're going into the market with, and that's why customers absolutely love it. And again, you're not doubling sequentially if you don't have product. That's the key. You've got to have the product.

Kasthuri Rangan: Some would say it's Bill's salesmanship, but there's got to be an excellent product?

William McDermott: There's got to be more than just Bill, I mean, because it's a big global economy out there. And if you're the Australian government, or you're the Italian government, or you're the Saudi Arabian government and the Kingdom, and you're buying into ServiceNow, I can't be in all 3 of those places at the same time.
And if you're looking at the top 2,000 companies in the world, we're getting close to having 90% of them doing something with ServiceNow. And what we think has really, really added a lot of value is that now we're the CEO's platform. And I think we earned our stripes with the CIO because of the outstanding platform and the notoriety it had in IT, which then evolved into the employee experience, obviously, into the customer experience and into the developer community. But think about the CEOs out there. If you believe -- and you should ask them -- the CEOs out there have had it with companies spending all kinds of money on IT, all kinds of money on applications. And in the end, they look around the boardroom and they say, "Why is it that this department can't talk to that department? Why is it that I can't have a single-threaded, end-to-end process in this company that takes my innovation from the foxhole and can move it?" Okay -- excuse me, from the factory, and move it to the foxhole in instantaneous time. Why can't we do that? Well, this system doesn't talk to that system. That data is caught in that vault, so we can't get to it. And it goes on and on and on. And that's not only in the private sector. Obviously, that's in the public sector, which explains why our federal, state and local business, particularly in the United States, blew way past $1 billion: because they love us, and it works. And the projects are quick to implement, fast to return value on invested capital. And everybody that does business with us is referenceable and successful. And especially in the public sector, they don't want to take chances with these huge programs that used to cost billions and billions. And now instead, they can do it for millions, install it, get it referenceable and make it a showcase in weeks, sometimes months, not years.

Kasthuri Rangan: Bill, you talked about the savings from call deflection and whatnot.
As you talk to your customers getting these savings from generative AI, are they saying, "Okay, I don't need to hire as many people because this thing is saving me money"? Or are they saying something else? How are they looking at it? Are they looking to cut costs, cut down employment? And you have a broader touch, therefore: what does this mean for the employment market in customer support and sales and marketing? Is AI going to take away our jobs?

William McDermott: I'm going to give you two ways of looking at it. So first, let's talk about the CEO. There's this CEO, a really great conversation. He has done roll-ups and dominated in the insurance industry. I asked him the same question. Dominated. Incredible. And he told me that he doesn't just want his people to have a great experience; he wants to do so much more with less. Meaning, in the event the revenue doesn't go into a hockey stick, he's got to get the productivity out of the headcount that he already has. And he needs technology to bring that value out of each and every employee that he has in the company. So there's definitely a focus on productivity. On the other hand, he is playing the M&A card and growing very fast because of the M&A, and he's acutely aware of the fact that the value of his company is based on how fast it grows. So he wants to use technology to move the top line. It's not about reducing headcount; it's about leveraging the headcount you already have. In tech, there are many more jobs available than people to fill them. Nobody ever talks about that. Millions of jobs available that aren't getting filled. Yes. And so it's a renaissance of technology that's going on in the world, because every company needs to be a tech company. In fact, I think every company needs to be a software company, which is why I think the ServiceNow platform needs to be a standard in every company in the world.
The other thing: what people don't know, they're genuinely afraid of. So if you think about Time magazine in 1966, they ran a big story that computers were going to take away 90% of the jobs in the global economy. That the only people that would actually be employed were high-level managers, and the governments would have to subsidize the other 90%. Jobs would be gone, and society would have to carry them, because computers were going to do everything. Now I think there are hundreds of millions of jobs that have been created in tech since then, and obviously, that never came to fruition. I believe, and I sincerely believe this, that AI is the wellspring of opportunity in the global economy. There are researchers that have independently said it will have an $11 trillion impact on the economy in the next handful of years. I believe that may be true. Maybe it's 10, maybe it's 9, maybe it's 8, but it's going to be big. And the reason for that is there is so much inefficiency. There is so much waste. There is so much human potential that can be activated by taking the soul-crushing work away from people and unleashing them to do things that really matter, that can help companies grow and prosper. And that has never been factored into the equation as people think about technology on a day-to-day basis. That's why we're working so hard to tell them the story.

Kasthuri Rangan: That's great. Powerful. Tell us about your partnership with Microsoft, Now Assist working with Office Copilot. And also, if you can, how is the Microsoft Copilot thing going for ServiceNow internally?

William McDermott: Yes. We have a great partnership with Microsoft, as I mentioned earlier. If you think about what I said, Copilot and Now Assist are 2 excellent, excellent Gen AI use cases for customers to consider, but they do very different things. And if you think about knowledge work and Office 365 and Teams and Dynamics -- by the way, we use Dynamics at ServiceNow.
These are all very important things, but they don't do what we do. So let me just give you an example. Say you're in Teams and you're doing your day-to-day job. And let's just say, for example, you happen to be in the HR or the technology department, and you need to order some new gear for a home office or get a computer delivered to your office because the one that you're using is broken. Microsoft can do a lot of things, but they can't do that. So Now Assist will be assisting you in getting the form factor and configuration that management signed off on, at the price that you agreed to in your procurement schedule, and it will go grab what you need, in the form you needed, at the price you bargained for, and deliver it to the address you wanted, and it will be done automatically, instantaneously. So you can see how that's going to work for you. But then, the thing that's going to be really interesting: AI agents. Hang with me on this. We see an enormous world of AI agents. Now the thing that's going to happen in the enterprise with AI agents is they are going to work 24 hours a day, 7 days a week, 365 days a year. We don't have to get them on any kind of health care plan. We don't really worry about them. They're just going to grind and grind and grind, and they're going to do a lot of the hard stuff. But they're going to be working with people, not just for people, and they're going to be able to think, not just act. They're going to be pretty smart, maybe even as smart as, if not smarter than, humans. Certainly, in certain ways they will be. And this is happening now. You saw the Xanadu release. If you haven't, take a look: we've got AI agents. Now this is our competitive advantage. Others will have them too, but they will have them in their domain. They'll be very specialized, and they'll do specialized things, which is fine. Let it happen. But what platform on an [indiscernible] is going to corral all these agents that are running around?
Where do you have some real [indiscernible] with your strategy on an end-to-end basis, where the finance agents work well with the sales agents, who understand the engineering agents? And now you've got an organization that's under control, and you have processes that are integrated, and the agents are plussed up and they're secure. What happens if the agents aren't governed, and there isn't a platform structure around them? You might optimize for one department, but you're not going to optimize the entity, the corporation. And that's what's happened for the last 55 years. Everybody gets to do their own thing, they self-optimize for their department, and the CEO sits back and wonders why he can't execute a clear vision across the enterprise on an end-to-end basis with immediacy. Well, everything is trapped in a silo. We took down the walls. And now you can have your agents. We're not competing with other people's agents. That's good, but we'll integrate those agents into workflow automation across the enterprise in a coherent way, so you run a great company. That's going to be the big frontier.

Kasthuri Rangan: Next year, when you come back, everybody will have an agent.

Kasthuri Rangan: You can actually ask the agent to take notes, and you can actually listen and enjoy the conversation. So on that note, Bill, it's great to have you with us.
[2]
ServiceNow, Inc. (NOW) Goldman Sachs Communacopia + Technology Conference Call Transcript
All right. All I ask is, are we ready for GenAI to move to the application layer in the bottom of the NASDAQ, I don't know, 2002, September, October, I forget. NASDAQ thousand stocks that did 8x multiples, not on revenue and earnings. The world feels like it's coming to a crashing end, and I'm depressed. Will I have a job? Will I be -- will software be around? I meet this gentleman, McDermott. 30-minute meeting, I come out feeling great about the company that you just joined, feeling great about the industry, feeling great about my job. I come back energized. Wow. And it's been, what, 22 years? You've had a profound influence on the industry. Thank you for everything that you've done for software. Thank you, SaaS, and the things that you're going to be doing in the future. And thanks, you've been a big source of personal inspiration. Thank you so much. William McDermott Thank you so much. I appreciate you, brother. Thank you. Thank you. That's sweet It's really sweet. Thank you so much. Now with that [indiscernible], people ask me, why is Bill always bullish? And what are you always bullish? I'm the guy that wrote the book, "When his dream." And in the opening quotes, by Robert Kennedy, and he said, "Some men's seeing as they are and say why. I dream things that never were and say, why not?" And that's who I am. And when I think about my life's journey, I just keep getting lucky by working for great brands that have unlimited potential. And if you lead people with innovation, customer-centricity and absolute unbreakable will and the passion to win every single day, it's amazing what can be accomplished when enough people care. And that's pretty much the story. And you asked me -- I will not mention the name of the stock, but 2007, June 29, you said, "What stock should I buy?" And like I was blown away by the trust you had me. I mentioned the name of a company that had a fruit thing in -- they were releasing the first smartphone or something. 
And you said, I bought it. I said, "God, he did it. He did it. Now I've got to take everything that I recommend seriously." So tell us, where do you want ServiceNow to be in 5 years? What are your long-term aspirations for the company? William McDermott I always focus, especially in technology, on the innovation of a company. Yesterday, as you know, we had an unbelievable Xanadu release -- 350 net new innovations built into our gen AI release, representing 5 million hours of engineering excellence in that one release. So innovation is at the center of everything. If there's no innovation, there is no hope in tech. We also focus on the culture. I think that it's easy to talk about culture when things are going great. When things aren't going great is when cultures are built. And as you know, a few years ago, things were pretty questionable out there, and companies were laying people off like it was a sport. And we took a very distinct position on that and said, "We will not lay off anybody at ServiceNow." And while there might be some uncertainty on the horizon, once it gets cleared, we're going to need all these great people, because we only hired 10s and maybe 9s, but definitely not 8s and 7s. So we're going to need them all. And I think really making ServiceNow the best-run company in the information technology industry, where we can say with pride, we take the innovation, we take the culture and we take the way we lead the company, no matter what your position is, so seriously, because we're all working for the customer and their customers, and they're going to need us now more than ever. Every day, they need us now more than ever. And we just started to bring that sense of urgency, and we will what we want. And we want to be the best, and we will it every day, and that's what I want. I want ServiceNow to be the best-run business in the information technology industry. And from a shareholder perspective, I do have goals.
And by 2030, for sure, I see ServiceNow, with the exception of the hyperscalers, as the most valuable enterprise software company in the world. That's what I believe is going to happen, and I don't have a doubt in my mind about it. Kasthuri Rangan You're welcome back in the intervening years of this conference, and we're going to see in 2029 when you're back. You guys [indiscernible], okay. I'll remember this. Thank you. Yes. Great. Bill, you talk to customers all the time. What is your feel for the pulse of what they're thinking about as they round out this year? What are they thinking about for calendar '25 -- budget expectations, priorities? William McDermott I think they always need to grow, and that somehow seems to get lost. Most people keep talking about expenses and productivity, but in the end, they both have to run a tight ship and keep it lean and productive, but they also have to grow. And I always try to think about what we are doing as creating that single platform, that single pane of glass that resides above 55 years of complexity and a total mess. When you think about the enterprise and the average person having to go in and out of 17 different applications, it's no wonder they want to work from home. Who wants to go to that enterprise? So with ServiceNow, they go into one platform. And all that complexity disappears, and their work becomes enjoyable, becomes a pleasure. Their creative juices can flow. They can get stuff done. So I really truly believe this is a moment in time where we have to help the companies we serve be more productive. Don't waste 1/3 of the workers' time doing silly season stuff. Get that productivity. Get that revenue per employee metric going north, and make sure that you're not only making the workers' day more pleasurable, but you're getting much more output per worker, and you can prove it.
Now once you free that up in gen AI -- and the #1 use case for gen AI is process automation -- then you also free yourself up to start thinking big and setting audacious goals on how you're going to grow your company, how you're going to unleash your company in new channels, in new ways, how you're going to apply the logic of process automation and data and really give every employee and every customer an unbelievable experience on an end-to-end basis. So we came to the market with a clear vision to be the AI platform for business transformation. And that's not a department thing, that's an enterprise thing. That's in every industry, that serves every persona and that crosses every geography in the global economy. That's what we're doing. And you can grow and you can be efficient and you can drive absolutely great results for your shareholders. That's what we owe every customer. Kasthuri Rangan That's great. When you look at what's been going on with generative AI, it's largely infrastructure. Jensen Huang was earlier today up on stage here. He did mention your company. I don't think he mentioned any other stock. He talked about employee workflows, customer workflows being built on ServiceNow. So that's infrastructure. A lot of the money has gone to infrastructure. When do you think GenAI moves to the application layer? William McDermott Well, first, I want to acknowledge a great leader and visionary like Jensen. When I first came into ServiceNow -- believe it or not, it's over 5 years now -- the first thing we did wasn't even picked up by anybody, because we've done this organically. We're the only enterprise software company in the world that crossed $10 billion in ACV, and we didn't have to buy a dime of revenue to do it. Didn't lay anybody off, and built our own pathway to the top by innovating great product after product after product. Now when I first came in, we picked up a company called Element AI in Montreal, Canada, because we had an immediate vision for AI.
And the Turing Award-winning [indiscernible] had great researchers and great engineers, but no go-to-market. A lot of people forget the go-to-market. It's not a good idea to forget that. And by putting it together with ServiceNow, we were able to recode Element AI into the ServiceNow platform, and we've actually been building large language models with Jensen and his great NVIDIA company for that entire time. So our paint isn't wet. We're not a pretender. We actually have real product, and that's what Jensen reacted to and why he might have mentioned us on stage. He calls us the operating system for the enterprise. And he, along with me, believes that this platform that ServiceNow has can manifest itself in helping companies manage their processes at a record clip in terms of speed. We just announced RaptorDB. We're in the database business, by the way. We now have a data workflow platform that we think is not only state-of-the-art, but we already know it's the fastest database in the world, and it can do analytic queries faster and better than any database in the world. And we know that as the workflow automates the process, we now have the ability to look at the data from any data source -- whether it's a system of record or it's a hyperscaler or it's a lake, we can grab the data as part of the transaction in the workflow. And you can have a complete knowledge graph now of your enterprise, whether it's people, process, data or device, on one platform with one clear pane of glass. And you can go into an integration hub and you can see all the systems that you have in the enterprise, all the devices you have in the enterprise. In fact, yesterday, we ran one little query with 1.3 billion different devices in one of the largest companies in the world, at sub-second speed. And we studied that, made some assessment on that, and can help customers navigate that kind of a complex query instantaneously.
And every persona now can simply have a graph and a dashboard, activated in real time on their device or on their desktop. No problem. We do it, and only ServiceNow does it. So this differentiation of innovation is what it's all about. Now as it relates to NVIDIA and Hugging Face and building things that have never been built before, we decided to go for domain-specific LLMs. And this is also multimodal, as you know. But the idea there was to build our own, to literally automate the way IT and the whole estate around IT is run, to completely reinvent the employee experience. Who cares about payroll? It's done. But how do I recruit, hire, onboard, train, certify, give you your learning journey so you're the best in the world at what you do, give you all the services so you can see whatever you need to see -- your comp plan, what's happening with my long-term retirement procedures? "Hey, how am I doing on the health care? I have to take a leave, I have a maternity issue." Whatever it is, we do it for you. And you have a great experience, still on the mobile. And by the way, there's no calling the 800 number -- you have it all. And then when you offboard, we handle that, too. A lot of people have employees that left their company years ago still in their data. That's a beauty. And then there's the customer and how you service the customer. And a lot of people don't know this, but when you think about customer service management, we blew past $1 billion. And when you think about field service, customer service, you think about the integration of ServiceNow Now Assist with Copilot from Microsoft. And just picture yourself, even in Teams, and you want to activate an action in ServiceNow. It's seamless. You don't have to jump in and out of some app or UI -- it's already done at a deep engineering level. And so you say, "Well, how is that an advantage in customer service management?"
Well, if I happen to be in ServiceNow customer service management and I want to completely compose a presentation -- perhaps it's a PowerPoint, on the fly -- to make a sale to Goldman Sachs, I can do that instantaneously. I don't have to go into a separate app. It's all engineered together. Kasthuri Rangan So if you want to make a sale to Goldman Sachs, our CEO is actually here. William McDermott I would very much like that. Thank you, Goldman Sachs. You're actually an excellent customer. As are 24 out of the top 24 banks in the world. We were 23 out of 24 coming into this year. We handled that situation in the first quarter. We're now 24 out of 24. We love financial services. In fact, in our new Gen AI release yesterday -- financial services, retail, telco, media, technology, public sector -- all of these industries now are getting Gen AI use cases out of the box with the best platform in the world, and it's real product. And the other thing we should talk about is Creator. A lot of people don't know this, but Creator Now blew past $1 billion last year. And so you say, "Wow, you mean you actually have developers now creating innovation and applications on the fly?" Yes. But the best part is, with complex workflows and governance procedures and the development of new applications, you're doing it on that same platform. So now your CEO knows, when you're an engineer, that anything you're building is in concert with the business processes and the way the workflow should be automated. And it's not a sidecar project, like the thousands and thousands of points of dim light -- point solutions that are killing companies today, costing them all kinds of money but dragging their productivity into the ground. All that goes away with ServiceNow. So the word is getting out there. And as the word gets out there, the company just continues to prosper. And that's the most important thing, getting the word out there. Kasthuri Rangan Bill, what's your vision for Now Assist?
In terms of net new ACV, there was a big increase in the second quarter. Where do you see this going? William McDermott Well, quarter-over-quarter, sequentially, we look at this doubling, and it has been doubling and it's going to continue to double, because we have a real product. And I think the idea of Now Assist and the GenAI revolution -- just think about it, $3 trillion will be spent on AI by enterprises through 2027. 36% of that will be GenAI in the space that we're participating in. So we're fighting for a $1-plus trillion prize. So to your point, when do we move past hardware and GPU stacks and get into the app layer or the platform layer? It's now. It has started with ServiceNow. Kasthuri Rangan And I think ServiceNow, from what I can tell -- you have great companies like Microsoft and NVIDIA that have prospered mightily because of... William McDermott Yes. And look, I think that we have built a framework partnership with Microsoft that's unbreakable. We have great partnerships with all the hyperscalers, though, it should be mentioned. I do need to do that, because we work with AWS. We also have a partnership now with Google Cloud. And it's important to give the customer choice. And so as you're talking about GenAI, just remember, our domain-specific models, they're not only products, but they run very inexpensively. Customers love that. So just think about 175 GPUs versus 1 GPU in terms of the cost to run a domain-specific LLM versus a big model. Think about zero latency, rocket fast, it's your data, and think about total security. Because, again, you're not going on any wild adventures. We're working within the framework of the corporation. Customers love that. So that's where it's at now. But there's lots of different innovation. Meta is doing a great job. Microsoft is doing a great job. Google is doing a great job, and we integrate seamlessly with all those models.
In fact, lots of innovation is going to take place where companies will build their own, and they can bring their own LLM and integrate that into ServiceNow. So this is infinite choice, no limitations, complete open architecture, and yet at the same time, very disciplined workflow automation and security across the whole enterprise. Even on security, the security companies out there are very good. But some might be good in the cloud, some might be good at a firewall, others might be good at endpoint. We integrate with all of them. And then you can have complete visibility on how they're all doing at different pieces and parts of the company in real time. So I think this idea of a control tower for digital transformation within every company is our calling card, and it's a good one, because I don't compete with anybody. I just do something that none of them do. Kasthuri Rangan I get pushback from investors: "Hey, the gen AI thing, is it really a business use case? How are customers getting value out of it?" I know you've deployed it internally. Can you drill into one or two customers that have deployed GenAI with ServiceNow and have gotten meaningful savings that are impacting their bottom line? William McDermott There's so many. Teleperformance would be an example. USI would be an example, and it's instantaneous ROI, by the way. So this isn't like a hard business case -- let's get the consulting team over there for a month and prove it. This is like -- it's a no-brainer. You're deflecting 90% of the cases that normally had to be managed by agents. So okay, now it's self-service. Just think about that savings. When the case does need to be managed, just think about resolving it in seconds instead of 45 minutes, and then multiply that by the number of cases. Think about a BT. Think about a London Stock Exchange. Think about creating experiences for the customer that are so uncommonly good that they're undeniable.
So I believe that the biggest cases right now would be in managing a case, deflecting a case, closing out a case, and making sure that, whether it's an employee experience or a customer experience, the time to resolution is so undeniably good that you don't need a computer to understand the business case. You could do it on the back of a napkin. And that's the kind of cases we're going into the market with, and that's why customers absolutely love it. And again, you're not doubling sequentially because you don't have product. That's the key. You've got to have the product. Kasthuri Rangan Some would say it's Bill's salesmanship, but there's got to be excellent product? William McDermott There's got to be more than just Bill, I mean, because it's a big global economy out there. And if you're the Australian government or you're the Italian government or you're the Saudi Arabian government and the Kingdom, and you're buying into ServiceNow, I can't be in all those 3 places at the same time. And if you're looking at the top 2,000 companies in the world, we're getting close to having 90% of them doing something with ServiceNow. And what we think has really, really added a lot of value is we're now the CEO's platform. And I think we earned our stripes with the CIO because of the outstanding platform and the notoriety it had in IT, which then evolved into the employee experience, obviously, into the customer experience and into the developer community. But think about the CEOs out there. If you believe -- and you should ask them -- the CEOs out there have had it with companies spending all kinds of money on IT, all kinds of money on applications. And in the end, they look around the boardroom and they say, "Why is it that this department can't talk to this department? Why is it that I can't have a single-threaded, end-to-end process in this company that takes my innovation from the foxhole and can move it?" Okay -- excuse me, from the factory and move it to the foxhole in instantaneous time.
Why can't we do that? Well, this system doesn't talk to that system. That data is caught in that vault, and we can't get to it. And it goes on and on and on. And that's not only in the private sector. Obviously, that's in the public sector, which explains why our federal, state and local business, particularly in the United States, blew way past $1 billion -- because they love us, and it works. And the projects are quick to implement, fast to return value on invested capital. And everybody that does business with us is referenceable and successful. And especially in the public sector, they don't want to take chances with these huge programs that used to cost billions and billions. And now instead, they can do it for millions and install it, get it referenceable and make it a showcase in weeks, sometimes months, but not years. Kasthuri Rangan Bill, you talked about the savings from call deflection and whatnot. As you talk to your customers getting these savings from generative AI, are they saying, "Okay, I don't need to hire as many people because this thing is saving me money"? Or are they saying the other thing? How are they looking at it? Are they looking to cut costs, cut down employment? And you have broad reach -- therefore, what does this mean for the employment market in customer support and sales and marketing? Is AI going to take away our jobs? William McDermott I'm going to give you two ways of looking at it. So first, let's talk about the CEO. There's this CEO -- a really great conversation. He has done roll-ups and dominated in the insurance industry. I asked him the same question. Dominated. Incredible. And he told me that he doesn't just want his people to have a great experience, but he wants to do so much more with less. Meaning, in the event the revenue doesn't go into a hockey stick, he's got to get the productivity out of the headcount that he already has.
And he needs technology to bring that value out of each and every employee that he has in the company. So there's definitely a focus on the productivity. On the other hand, he is playing the M&A card and growing very fast because of the M&A, and he's acutely aware of the fact that the value of this company is based on how fast it grows. So he wants to use technology to move the top line. It's not about reducing headcount, it's about leveraging the headcount you already have. In tech, there are many more jobs available than people to fill them. Nobody ever talks about that. Millions of jobs available that aren't getting filled. Yes. And so it's a renaissance of technology that's going on in the world, because every company needs to be a tech company. In fact, I think every company needs to be a software company, which is why I think the ServiceNow platform needs to be a standard in every company in the world. The other thing is, what people don't know, they're genuinely afraid of. So if you think about Time Magazine in 1966, they ran a big story that computers were going to take away 90% of the jobs in the global economy. That the only people that would actually be employed were high-level managers, and the governments would have to subsidize the other 90%. Jobs would be gone, and society would have to carry them, because computers were going to do everything. Now I think there's something like hundreds of millions of jobs that have been created in tech since then, and obviously, that never came to fruition. I believe, and I sincerely believe this, that AI is the wellspring of opportunity in the global economy. There are researchers that independently have said it will have an $11 trillion impact on the economy in the next handful of years. I believe that may be true. Maybe it's 10, maybe it's 9, maybe it's 8, but it's going to be big. And the reason for that is there is so much inefficiency. There is so much waste.
There is so much human potential that can be activated by taking the soul-crushing work away from people and unleashing them to do things that really matter, that can help companies grow and prosper. And that has never been factored into the equation as people think about technology on a day-to-day basis. That's why we're working so hard to tell them the story. Kasthuri Rangan That's great. Powerful. Tell us about your partnership with Microsoft -- Now Assist working with Office Copilot? And also, if you can, how is the Microsoft Copilot thing going for ServiceNow internally? William McDermott Yes. We have a great partnership with Microsoft, as I mentioned earlier. If you think about what I said, Copilot and Now Assist are 2 excellent, excellent Gen AI use cases for customers to consider, but they do very different things. And if you think about knowledge work and Office 365 and Teams and Dynamics -- by the way, we use Dynamics at ServiceNow -- these are all very important things, but they don't do what we do. So let me just give you an example. If you're in Teams and you're doing your day-to-day job. And let's just say, for example, you happen to be in the HR or the technology department and you need to order some new gear for a home office or get a computer delivered to your office because the one that you're using is broken. Microsoft can do a lot of things, but they can't do that. So Now Assist will be assisting you in getting you the form factor configuration that management signed off on, at the price that you agreed to in your procurement schedule, and it will go grab what you need, in the form that you need it, at the price you bargained for, and deliver it to the address you wanted, and it will be done automatically, instantaneously. So you can see how that's going to work for you. But then the thing that's going to really be interesting -- what about AI agents? Hang with me on this. We see an enormous world of AI agents.
Now the thing that's going to happen in the enterprise with AI agents is they are going to work 24 hours a day, 7 days a week, 365 days a year. We don't have to get them on any kind of health care plan. We don't really worry about them. They're just going to grind and grind and grind, and they're going to do a lot of the hard stuff. But they're going to be working with people, not just for people, and they're going to be able to think, not just act. They're going to be pretty smart -- maybe even as smart, if not smarter, than humans. Certainly, in certain ways they will be. And this is happening now. You saw the Xanadu release -- if you haven't, we've got AI agents. Now this is our competitive advantage. Others will have them too, but they will have them in their domain. They'll be very specialized and they'll do specialized things, which is fine. Let it happen. But what platform on an [indiscernible] is going to corral all these agents that are running around? Where you have some real [indiscernible] with your strategy on an end-to-end basis, where the finance agents work well with the sales agents, who understand the engineering agents. And now you've got an organization that's under control, and you have processes that are integrated, and the agents are plussed up and they're secure. What happens if the agents aren't governed, and there isn't a platform structure around them? You might optimize for one department, but you're not going to optimize the entity, the corporation. And that's what's happened for the last 55 years. Everybody gets to do their own thing, they self-optimize for their department, and the CEO sits back and wonders why he can't execute a clear vision across the enterprise on an end-to-end basis with immediacy. Well, everything is trapped in a silo. We took down the walls. And now you can have your agents. We're not competing with other people's agents.
That's good, but we'll integrate those agents into workflow automation across the enterprise in a coherent way, so you run a great company. That's going to be the big frontier. Kasthuri Rangan Next year, when you come back, everybody will have an agent. You can actually ask the agent to take notes and you can actually listen and enjoy the conversation. So on that note, Bill, it's great to have you with us.
[3]
Digital Realty Trust, Inc. (DLR) Bank of America 2024 Global Real Estate Conference (Transcript)
So welcome, everyone. Thank you for joining us this afternoon. I'm Dave Barden. I head up U.S. and Canada Telecom and Communications Infrastructure Research for Bank of America. Thank you for joining us. I'm really, really pleased to have with me Jordan Sadler, Head of IR for Digital Realty. And then this guy, Andy Power, CEO -- also important. And we're going to talk a little bit today about the data center industry. So thank you guys so much for joining us. Really appreciate it. So I guess I'd like to start at a big, big picture level, maybe with Andy. There's a lot of uncertainty about the political climate. There's a lot of uncertainty about the economic climate. There's a lot of uncertainty about the rates climate. What does all that mean for Digital Realty sitting here at the end of 2024, thinking about how to plan for 2025? And how should we think -- how do you think about it? Andrew Power So, a few things. One, we're a global company, and we're supporting 5,000 customers on six continents, in 50-plus metropolitan areas. We're a massively capital-intensive company, spending billions of dollars on new footprint infrastructure capacity. We are a real REIT, so we cannot retain our capital. So we're a capital-sensitive company, which gets into the mix of economics and interest rates and so on. So all those things are obviously risks to the business that we have to operate within and execute against. I'd say the fortunate piece of the equation is the demand side has proven, over numerous economic cycles, at numerous vintages or places in the telecommunication technology landscape -- from the dawn of the internet to mobile computing to cloud computing, now to gen AI -- that there are just these secular tailwinds of growth that outgrow the broader macroeconomic backdrop.
And I believe we are yet again seeing that play out with our customer base, many of whom -- including some who were on their earnings calls less than 24 hours ago -- are talking about great results, revenue growth, I think quoting the words "data center" 30 times, as an integral piece of infrastructure to the technology that they're building out. And that was just one example of many if you looked at the landscape of the, call it, hyperscale providers over the last several weeks or months. So all told, we are supporting things that are rising above the economic trends, be it digital transformation, cloud computing, and gen AI. David Barden Yeah. I think that that's a good kind of segmentation, so there's three big secular forces. On digital transformation, I think the question might be, if the economy slows down, could that slow down, or could it accelerate because companies are going to be looking to save money? Which of those two things do you think is more important? Andrew Power What's also unique about these components of demand is they're all unique at different phases of their growth or maturation, but they're also very linked and coupled, right? A digital transformation project for an enterprise customer -- be it, like, Bank of America -- is going to have some place for data center and private workloads sitting inside of our four walls, probably in numerous parts around the world, moving off of on-prem locations or data centers they built and operated a long time ago. It almost certainly has the use of many clouds, so numerous cloud computing growth interwoven in that. And I don't think anyone's thinking about digital transformation or cloud computing and not also thinking about how Gen AI is going to be supporting it down the road. So I don't see these things in isolation, as things that you would stop and cut back on.
And these are all pieces of business for an enterprise, as the end customer, that are improving their top line and their bottom line in terms of efficiencies at the same time. And you kind of see it in the, call it, CIO or IT survey reports -- the rank, the prioritization of everything that IT does, from data centers and servers to mobile devices and laptops, right? And AI, data center, digital transformation are top of the list of our customers' priorities. David Barden Yeah. I mean, I guess the way I've said it is, in expansionary times people are looking to gain revenue by going digital, and in contractionary times they're trying to save money by going digital, and they're trying to do it. I've learned, being here at Jeff Spector's Global Real Estate Conference with Sara Cooper, that you guys are going to jump in at some point randomly. So if you guys are going to do that, just go ahead. This one over here, that guy, right. So the second thing I wanted to ask was just about the next big driver. This is a -- it's become weirdly controversial -- the cloud. From the birth of the cloud, the idea was maybe that, well, who needs a data center if everything can live in the cloud? And then we kind of gave birth to this hybrid world where people want to keep a little bit to themselves, put a little bit in the cloud. But then more recently, in the last couple of years, we've had some resurgence of these bear theories that the cloud is going to eat the data center marketplace. Could you opine a little bit on where we live today? Andrew Power So the terminology "cloud," given that we're in a very physically oriented conference here, was probably not a great description, because it gives someone the illusion that the data just floats out to the ether and appears on our devices, when it is happening in physical infrastructure in a data center -- in many of our 300 data centers.
And I think you're right, there have been different vintages or views on cloud. Are we going to have a one-cloud world? Will AWS rule the world, right? And then it moved to -- I think that view has kind of been shelved, with even the biggest proponents of cloud saying multi-cloud -- take the best of all the different cloud providers. And at the same time, private cloud and hybrid IT infrastructure -- customer-owned servers in a purpose-built facility like ours -- are part of the architecture as well. And you've also had, like you mentioned, customers that were born in the cloud pop out of the cloud due to efficiency and scaling, and some pop back into the cloud for some of their workloads at the same time. So I look at this as -- we at Digital are supporting 5,000 customers, from corporate enterprises like Bank of America to the hyperscale cloud customers, and those customers we're supporting in 30, 40, 50, 60 different locations around the world. And we're a physical, trusted infrastructure partner for both those customer sets, trying to be that one-stop shop for space, power and connectivity. Jordan Sadler And I would layer on there, right, in terms of IT architecture, the argument is just not well struck or thought out relative to what happens across IT [Technical Difficulty] over time, right? So as Andy indicated, people are moving into the cloud, and some very significant portions -- actually, the large majority of large enterprises today -- actually expect to pull workloads off of the cloud in the near future, right, based on the same survey work that he was referencing. So we see people repatriating, potentially at a bigger rate over time. Certainly, the cloud players -- these hyperscalers -- would like everybody moving into the cloud, and they will, and there are some great use cases, especially for different applications.
But when you look at overall architecture, there are lots of use cases that make the case for, hey, we should have some of our own compute. David Barden Right. There are elements of proprietariness. There are regulatory reasons. There's also the nature of the cloud, which is a variable cost for an institution. And so if you're going to be using things maybe in a bursty way, it's very helpful. But for regular compute load, there might be owner's economics in having your own facilities in a data center environment. So just before we move on from that conversation: you mentioned you have 5,000 customers, but you've got some incredibly large, important customers. And some of these large and important customers are the subject of certain governmental scrutiny, if you will. I'm speaking specifically of TikTok and what could happen if they disappeared. How do you think about that? Andrew Power So we're obviously very confidential about our customers unless they give us their blessing to use them as an advertisement of the great quality service we give them day in and day out. So I'm not going to comment on any specific customer name, but you can imagine that as a global company that's been in business for north of 20 years now, we come into situations like this and have to think them through. We're making substantial investments in each of these markets in long-lived infrastructure, and counterparty risk is another way of framing it: will that counterparty disappear for whatever reason? It could be the things you illustrated, or it could be credit risk or bankruptcy at the other bookend. So, handicapping the facts and circumstances, and I have no inside baseball on, call it, the geopolitical or regulatory framework here, I'd say this about any customer that has been with us for 10, 20, 30, 40 years, these big hyperscale customers you talked about.
They didn't start in 2023 or '24 to be our customer, when we had this massive inflection in the rates for our product. So they very likely have contracts in place with us that are very attractive to them, less attractive than what we would have signed if they had shown up at our doorstep this year. So in most of the doomsday scenarios you could possibly imagine, vacancy in our portfolio caused by draconian events would, after whatever downtime comes with re-leasing, potentially be a very economic windfall to our bottom line, given the rates those types of customers pay on their installed base versus what we're signing in the market today. I'm definitely not rooting for that for any customer, and I don't think in my wildest dreams it would all come showing up at our doorstep like that. I believe there's a long road of jousting and regulatory process that could happen. And I would also say, unlike our B2B customers, the Azure, AWS, Google Cloud, Oracle Cloud type of customer that has locational sensitivity and sovereign sensitivity, GDPR, and is in certain of our European countries for European data, B2C-type customers whose users are just consumers using their apps do not have that. And many of the B2C-type customers put a substantial amount of infrastructure in the United States but have most of their users outside the United States, right? So banning a B2C customer per se doesn't necessarily mean that infrastructure leaves our shores 100% at the same time. So that's a long-winded way of saying, given our diversification and below-market rents, I think there's a pretty low likelihood that that benefit ever shows up for us, and I don't think I'll be reporting it out anytime soon. David Barden So let's talk a little bit about how business is doing. First quarter record leasing, I think you said, and this is the third leg of the kind of secular demand stool, roughly half was AI-related. We've gone 14 minutes and we haven't mentioned AI yet.
So I might as well go ahead and bring it up. Talk to us about the opportunity that Digital Realty faces and why Digital Realty is positioned to benefit from it. Andrew Power So just to recap your accolades, which I appreciate: record first quarter, record first half this year, double the pace of the prior year. We did call out a 50% contribution from AI in the first quarter and 25% in the second quarter. I think it's important to understand how we are attacking the opportunity. We are not chasing this opportunity into unproven markets just because a customer asked us to build a data center there. We're sticking to our 50-plus metropolitan areas where we see robust and diverse customer demand that is not just AI training-model demand: numerous cloud computing companies' compute, network, and enterprise demand in markets where we truly believe in the locational or latency sensitivity of the applications living there, like Northern Virginia, Frankfurt, Singapore, etc. David Barden And could you talk a little bit about something interesting you've said, about how demand is actually concentrating as opposed to diversifying? Andrew Power When it comes to AI, and AI is itself a multifaceted piece of demand, we're seeing the preponderance of large capacity blocks from hyperscale cloud companies, and they want three things. One, they want large contiguous capacity blocks. Two, they want it right now, like they're running to the bodega to pick up something they forgot. Yeah, exactly: 50 or 100 megawatts. And three, their preference is fungible markets, markets where, if they get their AI demand wrong at that very minute, they can put their cloud compute in that same data center. The third leg of the stool is not a must-have, though, and hence, because these core markets have had the greatest supply constraints, demand is spilling over to some second-tier markets as well.
That's due to power transmission and generation constraints and other trends like that. So we're doing that on the bigger, larger capacity blocks. In the record first quarter, we announced a big win supporting Oracle and a big GPU cluster for their enterprise AI cloud customers. We've also been supporting the more enterprise-oriented piece of AI. We had a win with the Novo Nordisk Foundation, where we and NVIDIA are building the largest supercomputer in the Nordics; that is not a 50-megawatt type of deployment, it's a more enterprise-oriented use case. Both of those segments are growing, but I'd say the big deals are probably getting the most spotlight. And I think, one, we're still getting started: training, in my opinion, is not done. Two, we know the next leg of this is inference. Think of the users of the applications, with AI devices, be it people or technology, querying the models. We understand that to be a multiple of the size of the addressable market we're already experiencing in the large and fast-moving training market, and it could come in capacity blocks that do not need to have that contiguous requirement. And we hope, though we don't know definitively, that that lends itself back to our campuses, where we have infrastructure hosting cloud computing and private AI or private compute for our enterprise customers. David Barden So I think the question then is, first, there's been so much money put into the data center industry, as we've all heard, from the private guys and from others. Our team estimates that just the top five hyperscale guys are going to spend $240 billion in CapEx this year and $280 billion in CapEx next year. And you're going to spend how much? So it's not like you need all the market share, right? There's a lot of runway that you've built.
And it kind of just happens that you were building a runway for digital transformation, then you added cloud to that, and you put land and power and capabilities and common sense together. And it so happens that now the hyperscale guys hit the panic button and go to the bodega. Bodega, right? Andrew Power We were supporting them for their traditional needs in many, many markets for a long time. We were repositioning when we saw a few things going back several years. One, we could tell this business was going global, right? We were largely a U.S. company, call it, 10 years ago. Now we're across six continents, 50 metropolitan areas, and 30 countries, right? Two, we saw everything was just starting to get bigger. Scale mattered, right? The data halls, the buildings, the campuses, the runways for growth. We had to future-proof for customers' growth, and their future just kept getting better and brighter, so we needed to be prepositioned and ready for it. And lastly, we saw that serving the full spectrum, from the enterprise to the hyperscaler and everything in between, was value-add to our platform, because we're driving cloud consumption by our enterprise customers cross-connecting to the public compute. And I imagine the AI workload will follow a similar virtuous cycle as well. So that's how we have thought about this. We've got north of 3 gigawatts of land on our balance sheet that was underwritten and mostly acquired before people were talking about GPUs. This has obviously positioned us well to capture this; we've moved faster than we initially underwrote and pulled forward a lot of that capacity, and that's what we're doing right now. David Barden So I guess I want to ask my big question, which is: you had a record first half of new leasing in the first half of 2024. Can we beat it, can there be a record, better second half? I won't ask about a quarter. Just give me the number...
Jordan Sadler Unlike my typical disposition, on the first-quarter call, in conjunction with the record, a similar question was asked of Andy: will there be another record? And I'd said, listen, in a business of this size you don't usually see records upon records consecutively, that quickly, right? Because we don't build this in a modular fashion, right? We don't go build all these speculative shells or suites; we're derisking our capital outlays. And hence we don't traditionally have shelves lined with product to sell out of the store and set records. That being said, my commentary was: one, we're in an era where everything in the XXXL size category is the most popular, right, the biggest capacity blocks. Two, our shelves were not bare in that category. We still had some very attractive capacity, and I think that carried into the second quarter, where Dallas led the way in terms of our signings contribution: not a record quarter, but a very respectably high quarter. And we still look at markets like Northern Virginia and others, where we have large contiguous capacity blocks, as an opportunity to put up a record. And lastly I said, I've got three more bites of the apple, so I won't rule it out. Now that has changed: I've got two more bites of the apple. I'm probably less convicted than I was with three shots at it, but I still wouldn't rule it out. So, on the question of the hyperscale CapEx numbers that our Internet analysts put together: do we have a breakdown of the $240 billion to $280 billion this year and next year that's specifically related to data centers? We don't.
We know that the vast majority of it is going to be the chips and the servers and all the bits and pieces that go into the data centers, but they're absolutely self-provisioning a portion of this. We could probably double-click into that a little bit more and figure out their numbers. But I think the message I was trying to communicate was that when you think about Equinix and Digital Realty being the two largest data center providers on the planet, their spending is a small part of this huge opportunity, and there's a lot to go take advantage of. Jordan Sadler I can maybe lend a little bit of a hand there. In our experience, we see data center fit-out by these hyperscale customers by the megawatt, right? We build for, call it, $10 million a megawatt, to keep it in round numbers. We see them investing in that same space at a rate of about $40 million to $50 million a megawatt in servers, racking, stacking, and cabling, per megawatt. Unidentified Participant So roughly 20% of the spend is on the data center itself. [Multiple Speakers] Yeah. Sure. I think the question was to characterize how the conversations with the big customers are going in the current environment. What we're seeing is a continued urgency for these large capacity blocks with the nearest-term delivery and certainty around power, and it's from the same group that were the top buyers of cloud computing. But that group has slightly expanded, because some of those customers were historically doing more self-builds, less so now, and we were doing deals where they took shells and built out the inside of the data centers themselves, also less so now.
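As an aside, the rough split implied by Sadler's per-megawatt figures can be checked with simple arithmetic. The dollar amounts below are the round numbers quoted in the conversation; the script itself is an illustrative sketch, not something from the transcript.

```python
# Back-of-the-envelope: share of hyperscale data center spend that goes
# to the building/shell vs. the IT fit-out, using the rough per-megawatt
# figures quoted above (~$10M/MW shell, ~$40-50M/MW in servers/racking).

SHELL_COST_PER_MW = 10e6           # ~$10M per megawatt to build the data center
FIT_OUT_RANGE_PER_MW = (40e6, 50e6)  # ~$40-50M per megawatt in IT fit-out

for fit_out in FIT_OUT_RANGE_PER_MW:
    total = SHELL_COST_PER_MW + fit_out
    shell_share = SHELL_COST_PER_MW / total
    print(f"fit-out ${fit_out/1e6:.0f}M/MW -> shell is {shell_share:.0%} of total spend")
```

At the low end of the fit-out range, the shell is one-fifth of the total, which matches the "roughly 20%" remark; at the high end it drops to about one-sixth.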
And there are some new names on the road to hyperscale today, big buyers as well, all against a backdrop where supply constraints, due to power generation, transmission, substation components, switchgear, sustainability concerns, moratoriums, and NIMBYism, are, call it, intersecting the market in many fashions. So our value-add of being prepositioned with all those attributes, and being able to operate, is extra appreciated right now against that backdrop. Now, the hyperscale business has proven to be the most volatile in pricing. Part of that is just that it went from earliest innings and matured as an asset class in a time period when interest rates were only going down, and it happened broadly in markets that didn't have supply constraints, right? And so rates for a market like Northern Virginia probably got pushed down to the 70s, and now they've popped up, call it, well north of 150, and we have the potential to be printing close to 200 in terms of rates, as an example. That phenomenon, I don't see playing out in our enterprise colocation business as much. Hyperscale is, call it, longer-term contracts. The colo contract format is usually a shorter-term contract. There is, I'd say, a greater stickiness, or less churn, in colo, and we've just had more regular pricing power in terms of escalations on those contracts. So you have less of the dislocation, or the really massive supply constraints, in the colocation market than in hyperscale. David Barden So let's -- we're going to run out of time here really quickly. So let's rewind a little bit and maybe close out that conversation. A couple of years ago, one of the challenges that Digital Realty faced was negative releasing spreads. And that was because the customers that you had signed a decade earlier became these behemoths, and they came back and asked you for a lot better pricing at the 10-year anniversary, and you had to give it to them.
Are we starting a new cycle where it feels good today, but we might have a problem in the future? Andrew Power I mean, the volatility in the business always has the potential to resurrect itself. But think about the way we're pursuing our strategy around hyperscale and what we're doing today. We are obviously signing at better rates than at any period in recent times. We're locking in the longest contracts we've ever had, 15 years. We're also pushing on the escalations. I think last quarter our biggest deal had a 3.5% rent bump, and we could potentially do better than that in the coming quarters. And I think what you have is overall market saturation: in 15 years' time, given the locational sensitivity, there are going to be fewer places for these customers to build the cloud out. Combine that with the inflation in build costs. There's no question that the per-megawatt costs we would quote were certainly single digits for many markets years ago; we now throw around 10 as a rough swag average today. These campuses are getting bigger. It's going to take longer to build them out. So the impact of inflation on build costs could continue as well. And I also believe the pain points on new capacity, yes, they may get solved, but I don't see quick solves with permanence. Yes, the southern line may come in and relieve power needs for Northern Virginia, but they're going to need to do a northern line after that, and that's going to take a series of years, not months. And other markets are butting up against other, call it, uses too. When Northern Virginia just started, it was cornfields, right? And we were welcomed as an asset class. Now we've got to be very good neighbors. And I think Digital Realty stands out in that, in terms of where we locate our data centers: next to the airport, not next to the battlefield, and things like that.
And I'm picking on one particular market, but across our 50 markets, the preponderance of these types of supply constraints, and the more thoughtful, elongated development timelines, I think these are features of the business that are going to be here for some time. David Barden Yeah. Thank you for that. I would just point out that today the telco, tech, and industrials group put out a report called "Who Makes the Data Center." So if you want to know who builds what goes into the data center, that's a big part of it. And earlier this year, the industrials group, Andrew Obin and his team, put out something that talked about grid demand and how data centers are not the only thing putting a tax on the grid; it's EVs, electric vehicles, and also the onshoring of manufacturing, which is a big deal. I want to maybe wrap it up, Andy, by talking about something that dovetails with the prior question. A couple of years ago, it was negative re-leasing spreads. It was dilution from acquisitions. Last year, it was trying to fix the balance sheet in a rising interest rate environment. The AFFO per share growth guidance for this year is 0% to 1%. There's a target to get to mid-single digits and, I think, an aspiration to do better than that. Walk us through how Digital Realty gets from where it is now on bottom-line growth, given all these great top-line things that are happening, to better bottom-line growth. Andrew Power So we basically came out at the beginning of this year and said that for next year, 2025, net of headwinds from the deleveraging that took place, mid-single digits is the goal, and thereafter that is not the goal, that is the floor, and we would do better than that. And our path to that is through, obviously, execution on the pricing lever, our cash mark-to-markets on the installed base, leasing up our vacancy, delivering on our capacity coming online, blocking and tackling.
But thereafter, past the headwinds from the deleveraging, we really think there's a path where we are essentially taking this demand, AI demand and cloud demand, and turning it into long 15-year contracts with 3%, 3.5%, maybe even higher escalations, and building record backlogs to have a long runway of growth that we want to drive to the bottom line. And the M&A dilution is behind us. And the only things that pose headwinds are things we think are long-term goods, be it contributing stabilized assets to private capital sources at attractive valuations, or how much development we share with partners. So we're not happy with 5%, even though that's the goal for next year. We think there's better ahead. And we want to build a long runway of consistently compounding per-share bottom-line growth as the top priority for our company for several years into the future. David Barden I think that's a great place to leave it. Thank you so much, Andy. Thank you. Appreciate it.
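As a side note, the escalators Power describes compound meaningfully over a 15-year term. The sketch below is illustrative arithmetic only (a rent level indexed to 100, with the 2%, 3%, and 3.5% annual bumps mentioned in the discussion); it is not a figure from the call.

```python
# How fixed annual rent escalators compound over a long hyperscale lease.
# A 15-year contract applies (term_years - 1) annual bumps after year one.

def final_year_rent(base_rent: float, escalator: float, term_years: int) -> float:
    """Rent in the final year of the term, after annual percentage bumps."""
    return base_rent * (1 + escalator) ** (term_years - 1)

base = 100.0  # index the starting rent to 100 for comparison
for esc in (0.02, 0.03, 0.035):
    y15 = final_year_rent(base, esc, 15)
    print(f"{esc:.1%} escalator -> year-15 rent index {y15:.1f}")
```

The gap between a 2% and a 3.5% escalator ends up being roughly 30 index points of rent by the final year, which is why pushing escalations is treated as a meaningful lever.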
[4]
Digital Realty Trust, Inc. (DLR) Goldman Sachs Communacopia + Technology Conference (Transcript)
Andrew Power - President and Chief Executive Officer James Schneider Okay, thanks. Okay, good afternoon, everybody. Welcome to the Goldman Sachs Communacopia + Technology Conference. My name is Jim Schneider. I'm the telecom and data centers analyst here at Goldman Sachs. It's my pleasure to welcome Digital Realty and CEO Andy Power with us today. Welcome. Maybe starting with a persistent topic, maybe even a can't-get-away-from-it topic at this conference, which is AI. It seems to me we've got this situation where the data center market has very strong demand, partly fueled by AI, and very constrained power and other dynamics to it. So I want to unpack those elements a little bit, starting with the supply side. On your Q1 earnings call, you mentioned working with Dominion to help address bottlenecks in Ashburn, and that you were cautiously optimistic about getting access to more power late in 2025, and this past quarter you talked about some of the constraints easing in Northern Virginia and other markets too by 2026. So maybe summarize for us the current state of play on the power supply side as you see it now. Andrew Power Sure. So we're operating across 50 metropolitan areas on six continents, and we've been seeing this evolve for some time now. I'll go into Northern Virginia, Loudoun County, and Ashburn in particular, but I would say this phenomenon is not episodic. Pre-AI demand trends were really unfolding in data centers; we were basically just running hot for a long time, and whether it was digital transformation, cloud computing, or IT outsourcing from on-prem to off-prem, we kept running into, call it, more and more roadblocks on supply. And Ashburn, Virginia was the pinnacle of this. Canary in the coal mine would probably be an understatement, given how important it is as the largest market in the world. But over two years ago now, the utility basically said, oops, we're out of power for several years.
On the back of that, having been building, owning, and operating data centers in the market for years and years, we worked as a partner with the utility company, looking at, call it, our infrastructure, our substations, our idle capacity, where we could reroute infrastructure and power, make sure that all our customer commitments were met, and free up some incremental megawatts that we could sell to new customers that direly needed that growth capacity. We also most recently granted an easement for Dominion to land in the Mars substation, which is a crucial piece in relieving the transmission constraint. So we've, I think, been a good partner to Northern Virginia in coming to the table with solutions. Time has also passed; we're getting closer and closer to the end of 2025 and the beginning of 2026, when the bottleneck is easing. But I wouldn't say these problems are going away forever, because demand has continued to remain robust and other bottlenecks have popped up. Substations are a critical piece, as are the components and the switchgear. And just weeks ago, there were expressions that the timelines for new incremental deliveries in that market could be pushed out an incremental, call it, 12-plus months versus prior standards. So I think what you're seeing here is that the broader, call it, supply for data centers is just going to be more prolonged, and will need to be more thought out, in coming aboard. And this is, again, not just Ashburn, because this phenomenon is appearing in Santa Clara, just down the road, with even further-out gaps in power. There are moratoriums in certain markets; Singapore and Amsterdam experienced this recently. Dublin has had power generation and transmission issues. And it's not just about the power either. It's about sustainability concerns, broader supply chains, NIMBYism, a whole host of factors where I think the bar for this industry has been raised.
And then AI has arrived at a coincidentally similar time. I wouldn't say AI was the straw that broke the camel's back, but it certainly has shifted the, call it, supply/demand dynamic in the favor of providers. James Schneider Maybe just broadly characterize: are there still markets that are getting a lot tighter? And are there any that are getting looser? Andrew Power There are definitely markets that are still getting tighter, markets that didn't feel like they had much constraint at all, like a Chicago or a Dallas here in the United States, that are certainly edging in that direction. Outside the United States, the London market is feeling power constraints, as are some of our other FLAP markets in Europe. Asia Pacific has been a pinnacle of this because it's tougher to build in some of these markets, like a Tokyo, for example. Loosening up? In one case, you could say Ashburn is loosening up because we're getting close to the first, call it, big connection to come. But at the same time, I don't think that is going to be a cure-all that gets us back to normal in how we do business in the market. So I wouldn't say any are dramatically changing to the loosening side. James Schneider Yeah, fair. Then maybe from the demand perspective for a second. For those investors who are a little bit newer to the space, can you explain why those markets you mentioned are the ones where demand has been strongest? And how concentrated is that demand among specific kinds of customers, specifically hyperscalers? Andrew Power So we focus on major metropolitan areas, the 50 around the world I mentioned, where we see robust and diverse customer demand for hybrid IT, multi-hybrid cloud, hyperscale cloud compute, and now AI in the mix as well. These markets have had two things going for them.
One, they were the origins of the internet in some regards, and that snowball effect just brought more infrastructure, more fiber connectivity, more access to power, and more customers landing there. That compounded with the advent of cloud computing, where locational latency sensitivity arose, and the clouds went and picked major markets. They picked their availability zones with radius restrictions, and that has essentially made these markets as desirable as they are today. AI, which I would still describe as the early innings of a long process here, the first innings being training, is not necessarily latency- or locationally sensitive. But when we hear our customers' wish list of what's important to them for AI today, these being hyperscale customers, it's large contiguous capacity blocks, right now, like they're running to the grocery store to pick it up, and they also use the word fungible markets, which goes back to these core markets. Because they're figuring this out in real time and don't know if they'll get it wrong, they don't want to be building these large infrastructure investments in markets where they can't backfill with cloud compute. So that's why I think it's an additive item to the strength of the cloud markets. James Schneider And if you go down to the next, kind of second-tier, third-tier markets, are market conditions there appreciably better? Are there places where you might see a bit of a spreading of demand into those markets, or any of them actually loosening up? Andrew Power You're seeing a spillover effect to markets where we may have had a position in data centers, but it was much more network-oriented deployments, colocation, enterprise-oriented deployments, and now hyperscale is becoming a bigger piece of the puzzle. But these are markets that didn't have the benefit of supporting data center growth for 15 or 20 years, so they weren't necessarily ready for the monsoon.
And all the ingredients needed to bring on this digital infrastructure, power generation, transmission, labor forces, construction, operational talent, aren't necessarily ready to absorb it. So some of these markets have popped up and then said, oh, we're out of power already, real quickly. So there's definitely a spillover effect. A market like Atlanta has benefited from the current facts and circumstances. There are others as well. But I've talked to probably more power companies in the last 18 to 24 months than I talked to in my prior eight years at Digital Realty. And it doesn't seem like easy power, data center infrastructure, the whole kit, in the places customers really want to go, is at anyone's fingertips. James Schneider Yeah. Going back to the topic of AI for a second. I think most investors perceive this as something that's happening mainly in hyperscalers' own data center facilities, so far at least. In your data centers, what have you seen in terms of direct AI demand for workloads? Is that more weighted toward training or inference, and when does it become a more material part of your demand profile? Andrew Power We're not seeing the customers push the AI workload towards self-build versus leased capacity. It goes back to those ingredients I mentioned previously: most of the customers were already behind on their self-build plans, not ahead of the game. So it's not like they were sitting there with idle large gigawatt capacity blocks and big vacant shells thinking, yeah, that's a perfect place to put AI workloads. So I think this is a resource- and bandwidth-constrained environment for customers, and that key ingredient they don't have is actually pushing them more towards outsourcing. So, we had a record first quarter, with a 50% contribution from what we have deemed AI workloads. The hyperscalers certainly led the way in that contribution, which was probably closer to 25% in the most recent quarter.
Again, a big contribution from hyperscalers. We are also starting to see the tail of enterprise-oriented workloads, smaller deployments: a megawatt, half a megawatt. And listen, this has been our bailiwick. We've been pushing the power densities, given our heritage of coming from the hyperscale and moving towards the enterprise colo. So we've been there with high-performance compute, and we've been a great home to win those new applications today. James Schneider Yeah. You've been leaning into innovation with your partnerships with leaders in the industry. That's continued with your partnership with NVIDIA on the DGX-Ready solutions for colocation. Can you highlight any specific use cases where customers or clients have successfully deployed that kind of solution? And how are enterprises actually using it? Andrew Power So with a flagship partner like NVIDIA, which is now a household name, we were DGX-certified years ago, probably well before the customers were even ready or using it. I believe we were first out of the gates in Japan, in Tokyo, I think, ready for the H100s. That was almost two years ago. We had a great win: we are building with NVIDIA what will be the largest supercomputer in the Nordics, for the Novo Nordisk Foundation, which is a great milestone win for that infrastructure. I still think adoption by the enterprise, especially in terms of maturity [indiscernible], is in nascent territory. Lots of people are thinking about it, thinking about planning their data center infrastructure to be ready for it, but it's the minority of the activity we're seeing today. And fortunately, cloud computing is still growing tremendously with us. Fortunately, enterprises using multi-hybrid cloud and private cloud are still growing tremendously with us. That's been the lion's share of the record new logos adding to our 5,000-customer base.
That's been the lion's share of the less-than-a-megawatt bookings that have been consistently growing quarter over quarter, year over year. So I think it's going to take some time for this technology to build out, but I think we have a big hand to play in supporting it. James Schneider Yeah. Maybe one last strategic-level question for you. Many of your former public peers have gone private over recent years, in some cases because of their ownership, and they have access to a significant "private equity backstop" in terms of the amount of capital available to them for new facility expansion and the like. What advantages do you have over them in remaining public? And what disadvantages might come from being public? Andrew Power We really think about the hyperscale business and private capital hand in hand. I think it's apropos as to what comes next with a lot of those platforms; the rumor mill churns almost every day now. Obviously, in the public format, we've been very direct and specific that we are focused on compounding and accelerating per-share growth, right? In the hyperscale arena, where you're spending significant amounts of dollars on longer and longer builds because the projects are getting bigger and bigger, you have to sacrifice near-term return for value creation out in the future. We are obviously doing this at a leverage point that's investment grade, so one-third debt. They're kind of running the opposite playbook. Now, interest rates have changed dramatically in just the last couple of years, so the money's no longer cheap or free for those capital structures.
The way Digital has really tackled this is we want to basically have our fishing poles in all the pools of capital, so that when it comes to hyperscale, we can dial up private capital in various partnerships to help fund that growth for us, keep the piece of our business that doesn't have that long development drag, that doesn't have the weaker pricing power on cash mark-to-markets or rent bumps or escalations - i.e., our enterprise colo business - wholly owned, and then utilize our private capital business to help, call it, accelerate that bottom line growth. I'm not sure one is better or worse. They're different. And we're trying to harness the benefits of public capital that lets us raise - we raised nearly $900 million of euro bonds in less than 24 hours on an unsecured basis at a sub-4% coupon just a day ago - or raise equity efficiently as well, but also turn to private capital when we need to use that at the same time. So it's another tool in the toolbox. James Schneider Yeah, okay. Maybe some technology trends that may not - Chris is elsewhere today, but I want to ask you about power density. Beyond power from a supply perspective, a lot of these GPU-based configurations require much, much higher densities. Previously, we were talking about 5 and 10 kilowatts per rack. Now with the H100 and the B100, we're talking about 60, even 80-plus kilowatts per rack. In your operations, broadly speaking, are you seeing cannibalization of CPU computing by GPUs on a wholesale level, or just in sort of pockets? Andrew Power Just go back to last quarter, which wasn't a record, but it was a pretty darn strong quarter, and 75% of it, we would say, is not AI related. So obviously, much more CPU orientation. We obviously have the bellwether in the GPU category right now, pushing the envelope to the most extreme potential builds. There's still a tremendous addressable market that says, my infrastructure doesn't need that application, right?
So we've made sure that our infrastructure has the modularity, the flexibility, the fungibility to ramp up power densities as needed. That's often done in retrofits, which we've done historically, which we've done in the last year, which we're doing right now. We've also prepositioned our new designs to be, call it, liquid-ready. Quite honestly, for some of these customers that even want to use it for GPUs, their first iteration of GPUs will be air-cooled, not liquid-cooled, just because they would have had to wait for the data center to be redesigned or retrofitted, and they're not going to wait that long. And they'll think about the retrofit - their retrofit will probably be on the first or second server refresh. I think, if our 170 data centers go up to 150 kilowatts [a rack], we're making sure we're ready, but at the same time, we have tremendous growth from lots of customers that are saying, I'm not going to put the preponderance of my infrastructure at that power density, right? I'm going to need a mix. I'm going to need to retrofit a portion of my hall, so I can use a portion for GPUs, but the rest for CPUs. I think it's going to be an amalgamation of both those types of infrastructure [indiscernible]. James Schneider Yeah. So, if you look across your portfolio of facilities, what percent of them are sort of ready and able to accept significant power density GPU-based workloads today, and which ones aren't? And then how do you think about adding other kinds of retrofits, like liquid cooling, et cetera, over time? Andrew Power The majority of our data centers that you would retrofit are ready for the GPUs. And the minority that are not are legacy telco hotels that have thousands of cross connects - buildings that were never built to be a data center and are not meant for AI or GPU workloads. They're going to be the connectivity hubs for the major metropolitan areas.
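The density figures quoted in this exchange (5 to 10 kilowatts per rack historically, 60 to 80-plus for H100/B100-class racks) imply a roughly tenfold drop in how many racks a given power envelope supports, which is why retrofits and liquid-ready designs dominate the discussion. A back-of-envelope sketch; only the kilowatt figures come from the conversation, and the helper itself is purely illustrative:

```python
# Back-of-envelope: how rack density changes facility layout.
# Density figures come from the transcript; the helper is illustrative
# and counts IT load only (no cooling or distribution overhead).

def racks_per_megawatt(kw_per_rack: float) -> int:
    """Whole racks a 1 MW power envelope can support."""
    return int(1000 // kw_per_rack)

legacy_cpu = racks_per_megawatt(7.5)   # midpoint of the "5 and 10 kW" era
gpu_dense = racks_per_megawatt(70.0)   # midpoint of "60, even 80-plus kW"

print(legacy_cpu)  # 133 racks per MW
print(gpu_dense)   # 14 racks per MW
```

The same megawatt that once powered a hall of colo racks now feeds a single row, which is the physical reason retrofits concentrate on power and cooling rather than floor space.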
These are landmarks called internet gateways - 56 Marietta, 350 Cermak, Sovereign House - that just have a different use case than, call it, AI compute. And we've been pruning our portfolio over time. We've sold billions of dollars of data centers where we didn't believe in the long-term growth potential. Some of that was markets we didn't believe in, some of that was customer profiles, and some of that was infrastructure related. So we think we're very well positioned for where this is going in terms of the retrofits customers may need on our campuses. Again, our campuses are multiple expansive buildings with some type of proximity, if not all within a fenced area. Substations have the power infrastructure that can be already there or upgraded. If and when we need to have a more rapid densification play, I think we're very well positioned. James Schneider Got it. And then from a geographical perspective, it seems like most of the activity has been concentrated in the US. But how do you think about the footprint in EMEA and APAC? And to what extent do you expect more AI to be happening in those places as well? Andrew Power It's 100% accurate that we've kind of gone full circle on the maturity of data center markets, because the North American market was the most mature, most built out, with the slowest relative growth rate versus outside the United States. But the AI first wave has certainly landed on the shores of the US first and foremost. And I still think it's in its infancy in terms of globalizing. Part of that's likely due to the training workload. It's kind of going to the place where you can most efficiently and expeditiously build the infrastructure you need for the GPUs and the training. When I talk to customers - where I'm really getting my intel from - I believe you're going to see a globalization trend. I don't think you're going to see the size of the training replicate in these other markets.
But it's the inferencing, as it unfolds, which I think - whether it's in the United States or outside the United States - is going to be a much larger, call it, overall market and opportunity. I think the non-US markets will have a much greater share of that, and I think we're well positioned with these major campuses across Europe, Latin America, Africa, and APAC. James Schneider As you think about expanding your facilities, and also the differences in multiples between public and private assets out there, how are you thinking about your appetite for acquiring data center assets from others versus doing new organic builds yourself? Andrew Power I think the consolidation of the strategic puzzle pieces of the industry is in the rear view mirror, not the front view mirror, for Digital in particular, and I think for our major rival as well. All the chessboard pieces have been picked up, quite honestly, and we've been much more focused on organic market expansion, where there isn't a connectivity-oriented player to go buy, quite honestly. The lion's share of our capital spend has been development of new capacity in our traditional markets. We've entered a few markets in the last few years as well. So I think that's much more where we're putting our new investment dollars today, and I don't see that changing. James Schneider And you're well within - in fact, at the low end of your target leverage range now. So how do you think about your appetite for further JV investments versus 100% ownership? Is there a certain IRR level you're targeting for 100% ownership, and what do you target for JVs? How do you think about those two things? Andrew Power Enterprise colo, interconnection-oriented opportunities - especially given the complexity and the attachment to our platform, for a host of reasons - we keep almost entirely on balance sheet, with one-off examples like South Africa with Teraco, opportunities like that.
Hyperscale is where we pull the levers of private capital. We've done that on stabilized assets. We've done that now in development. And it's not that we like a project enough for ourselves but don't like it enough with a partner, or vice versa. If a project doesn't pass muster for us - if we wouldn't want to build it ourselves and own it - we don't do it. It's about using those levers to essentially fund the business in a way that helps us accelerate that bottom line growth for next year, and then grow off that and compound thereafter. James Schneider You alluded to it when we were talking about power before, but you also alluded to other factors that are constraining your ability to grow, whether it's NIMBYism, state or local regulations, or the rest of it. How are you thinking about the areas in which it's most attractive for you to build new facilities today, balancing obviously the hot markets, where you could build all day long if you had the power and no constraints, versus the other ones? Andrew Power We're trying to make sure we're a very thoughtful and good partner to all the constituents here and make sure folks know that we're in this for the long haul. We're not in data centers or AI for a trade. We're trying to build long-term, appreciating, growing cash flows with long-term pricing power. And that means positioning in markets where we see runway for growth, where we're going to be good neighbors - next to the airport versus the housing project, things like that. We're giving back in terms of load shedding, giving back power when certain folks need it in different communities. We are looking at markets where we have different plays and potentially expanding our playbook. In most of the major US markets, we have what we call the connected campus.
It's the series of the highly connected internet gateway and one or two major campuses where customers can position their workloads - put their most latency-sensitive pieces in the connected piece, and something that's a little bit bigger on the campus, alongside cloud compute. There are certain markets where we just have the connectivity angle, and we're thinking about expanding what we have in that market - but we've been doing business in that market and we know it very well. So that's how we're thinking about the business, much more than trotting out to some one-off market where a customer says, please build me a data center there - because we're going to be there for the first renewal and the second renewal. We're going to be there, and we need to make sure that we're building long-term value. James Schneider Outside of the US, how are we thinking about what markets are most attractive to build in? I'm assuming some combination of where your customers have a presence, where competition is relatively weak, where JV dollars want to go - but how are you thinking about balancing that calculus? Andrew Power We're looking at, I'd say, going back to playbooks that have worked for us successfully. In Europe, for example, we have a real incredible position in Marseille, which is on the back of subsea cable, connectivity, now enterprise, and certainly a cloud computing market that blossomed out of nowhere for France. You look to the right and the left - Barcelona and Rome are examples, as well as what we've done in Greece. They may not become the next Marseille, but we looked at those and said, if we can get the right locations, the right network density, the subsea cable, we're going to build something that will last in power there - last in value, that is, not power in terms of energy.
In other markets - Frankfurt, for example - we're looking at a market that's supply constrained where we've got tremendous land holdings that we can potentially add onto on the periphery, with adjacency, and make something big really even bigger and even more attractive [indiscernible] in our category. So we're trying to exploit our advantages and continue to build a moat for our long-term business. James Schneider Yeah. Maybe just to hit a couple of financial questions toward the end of this discussion, because obviously that's what people ultimately care about to some extent. Clearly, occupancy rates have been fairly strong. You guided to one or two hundred basis points of occupancy improvement for this year. You've seen pretty strong new leasing activity. Do you feel like, in the environment we're in now, customers are actually pulling in leasing activity - they see a hot market, so they want to get that allocation today - and how do you expect the sustainability of re-leasing spreads to play out over the next several quarters? Andrew Power There's no question that, especially for the larger customers that have these AI workloads, that urgency is at the forefront. Fortunately, this is happening at a time when there are just broader supply constraints, right? So you can only pull in so much, because the market only has so much. That's why a lot of these markets get to pretty darn full capacity. The enterprise colo business - I don't think that is experiencing the tightness from AI demand. What I think you're seeing there is that ourselves and another competitor are really pulling ahead, and there aren't other attractive options that can deliver all that value to you as an enterprise colo customer. That's certainly advantageous to our economics. On the cash re-leasing spreads, we've just had this dramatic recovery in rates.
I don't know - if you had asked me 18 months ago, 24 months ago, would they have gone where they are today, I probably would not have expected them to run as far as fast as they did. I don't know whether they're going to go another leg up or not just yet, but I know my expiration schedule is stepping down year over year. So I have an opportunity. Now, some of those contracts have advantageous rights for the customer, but the way things are moving - with customers wanting to grow their infrastructure, intensify their infrastructure, change their infrastructure - I think we're going to have a good opportunity to bring forward some of those renewals and capture that mark to market, even if the rate environment, the customer rate environment, stays where it is today. James Schneider Yeah. One question I get from investors often is how you think about occupancy and utilization, and how that ultimately factors into the financials. You report a portfolio-wide occupancy rate of about 83% - some markets like Northern Virginia in the 90s, others like London in the 60s. Can you maybe talk to why there's such a wide spread and variance in that, if the market is as constrained as we're talking about today? Andrew Power I really think of the occupancy in two different buckets, call it enterprise colo orientation and hyperscale. From the enterprise colo standpoint, you always have some type of, call it, friction - call it churn or vacancy. But I still think we've got a ways to go in that category. And I look at other examples that say you could bring that up closer to 90% over time. On the hyperscale side, it's obviously pushing up even higher than it's probably ever been, given the quantum of pre-leasing. We're selling out of things that have historically been challenging to lease.
And lastly, on the margin, some of this demand from newer upstart companies and smaller players is getting pushed to tougher-to-lease capacity, because they're not first in the queue, unfortunately, behind the likes of our biggest customers. So that's a long-winded way of saying that we're actively taking measures on what we dispose of, what we monetize, how we're leasing and how we're executing to continue to push that occupancy north. James Schneider Yeah. And clearly another topic that I think is central to a lot of investors' minds right now is getting back to FFO per share growth for the company, and what run rate of growth you ultimately sustain over the next 12, 18-plus, 24 months. How do you encourage investors to think about the algorithm for FFO per share growth? Andrew Power We had to take a lot of steps backwards over the last several years to take big steps forward now. And we understand that the driver of our value is accelerating, compounding bottom line per share growth. That's why we may be the only company that essentially gave two years of out guidance on our earnings call. And we said, call it, mid-single digits. And that's even muted, because the path of deleveraging - asset sales, joint venturing, which are the right plays long term, as well as other deleveraging - is still going to be impacting us in 2025. And what we said second is, that's not the new go-forward rate, that's the new floor. And where we go from there is north of that mid-single digits. And the levers we're playing with then are how much development we do, or really how we fund that development. So we're trying to essentially put up the number we said last year for next year, make that the floor, not the bogey, and then continue to grow off that. But also, at the same time, these levers we're pulling are to make that runway of growth the longest possible, right? Because these developments we're doing, these are good-ROI projects.
They're just weighing on your per share growth while they don't yet produce any income, and they're big, heavy capital projects. So we have religion on this topic. James Schneider Finally, just to leave everybody with one question: broadly speaking, when you talk to investors - and you've spoken to a lot of them over the last couple of weeks, I think - what is the one thing that you think is misunderstood, if anything, about the Digital Realty story, and what's the one thing you would encourage investors to focus on? Andrew Power So what I'd describe - and it's very apropos for this - is that I view Digital Realty as a small but mighty boat in a big ocean of AI right now, one that has not had the ups or the downs of some of our best customers and partners out there. And I know there are a lot of bulls and bears on the topic of AI. What I can tell you is this theme is tremendously additive to our business. From what we hear from our customers, it's not ending. It's still getting started. It's going to be multifaceted how we support it. We've almost always been talking about big deals - big, large GPU training-type things. But how customers use this, how enterprises engage with us, where they put that infrastructure - that's really a long tail of growth, I believe. And lastly, while we're in this portion of the AI movie, or whatever analogy you want to use, we are winning business that we're converting into long-term recurring revenue streams at attractive ROIs, with the highest escalations - 3%, 3.5%, 4% contractual bumps - the longer durations, 15 years, and we're doing it in core markets. We're not betting on new markets that may or may not be there in 10 or 15 years. We're doing it in core markets where, even if we're wrong, there's other demand forming, be it cloud computing or enterprise IT. So it's a different way, I believe, to benefit from and support this long-term secular growth. And I think we're executing against it quite well.
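The contractual escalations Power cites (3%, 3.5%, 4% annual bumps on 15-year terms) compound meaningfully over a lease. A quick sketch; only the rates and the term come from the transcript, while the function and the base-100 indexing are illustrative:

```python
def final_year_rent(base: float, escalation: float, years: int) -> float:
    """Rent in the final year of a lease with fixed annual escalations.

    `escalation` is the annual contractual bump; the first year is the
    base year, so the rent compounds for (years - 1) periods.
    """
    return base * (1.0 + escalation) ** (years - 1)

# 15-year lease, year-one rent indexed to 100
for esc in (0.03, 0.035, 0.04):
    print(f"{esc:.1%} bumps -> {final_year_rent(100.0, esc, 15):.1f}")
# 3.0% -> ~151, 3.5% -> ~162, 4.0% -> ~173
```

In other words, a 4% escalator leaves year-15 rent roughly 73% above year one, which is why escalation rates and durations feature so prominently in the answer.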
James Schneider Well, unfortunately, we've got 50 more questions and no more time, but that was a great conversation. So thanks, Andy, for being with us today. Appreciate it.
[5]
Dell Technologies Inc. (DELL) Goldman Sachs Communacopia + Technology Conference (Transcript)
Jeff Clarke - Vice Chairman and Chief Operating Officer Mike Ng Great. Thank you, everybody, for joining the session. Welcome to the Dell Technologies keynote fireside chat at the Goldman Sachs Communacopia and Technology Conference. I have the privilege of introducing Jeff Clarke, who is the Vice Chairman and Chief Operating Officer at Dell. Jeff has been at Dell since 1997 and is responsible for running the company's day-to-day business operations and setting long-term strategy. My name is Mike Ng, and I cover Dell and hardware here at Goldman. We have about 35 minutes for today's discussion. So first, thank you so much for being here, Jeff. It's a real pleasure to have you here. I've heard that last month you celebrated your 37th anniversary at Dell, so congratulations on that. As a result, you've had a front seat to some of the biggest technological changes that have happened, not only at the company but also in the industry more broadly. I was wondering if you could first just talk about the technological shifts that we're seeing today, particularly as it relates to AI, and whether you think this is different or transformative relative to technological shifts we may have seen in the past. Jeff Clarke Sure. Maybe just some quick context. Think about adoption rates: it took 50 years for all U.S. households to get electricity, roughly 30 years to get to a 90% adoption rate for the Internet, roughly 35 years to get adoption of PCs to 90%, and roughly 25 years for phones to make it to 90%. This one is really different. I've not seen anything move this fast. A data point, at least for us: a year ago in Q2, essentially 0% of revenue. And in the most recently reported Q2, 40% of our server and networking revenues were AI revenues. This thing is moving incredibly fast.
In the last 12 months - or four quarters, if you prefer - we've sold nearly $9.5 billion of AI infrastructure and shipped $6.5 billion of AI infrastructure. It is extraordinarily fast. And if you think about it, I've made some bold predictions: that by 2026, over half of data center demand will be AI. We look at two orders of magnitude of computational intensity growth over the remainder of the decade, so 27 quarters [ph]. You've got token growth of 151 times over the next handful of years. We haven't even talked about inference, and inference will be 90% of the AI workload by the end of the decade. There's nothing that's moved this fast. And I get asked why. This is an inflection point; there's a fundamental change in the technology stack. This notion of accelerated computing is replacing a lot of manual work and human work with computers. It's driving new levels of productivity that haven't been seen since probably the industrial revolution. There are not many gifts of 20% and 30% productivity that I can think of, certainly in my working lifetime. And as a result, this is a conversation in every boardroom. It's going to change - and is changing - the basis of competition. And if you don't have a strategy here, it's an existential threat. You will be out of business. That's why this is different. Mike Ng That's great. Can you talk a little bit about how Dell plays into these AI infrastructure investments that we're seeing across the industry today? Naturally, there's been a tremendous amount of investment by hyperscalers, but also by - I'll call them Tier 2 cloud and AI CSPs, enterprises and sovereigns. If we were to think about those customer segments, or whatever segmentation you think is appropriate, where does Dell play best?
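Before the conversation moves on, Clarke's growth arithmetic is worth a sanity check: two orders of magnitude (100x) of computational intensity over the remainder of the decade, which his "27 quarters" reading puts at roughly 6.75 years. The helper below is illustrative; only the 100x figure and the horizon come from his remarks:

```python
def implied_annual_multiple(total_growth: float, years: float) -> float:
    """Constant yearly multiple that compounds to `total_growth` over `years`."""
    return total_growth ** (1.0 / years)

# "Two orders of magnitude" = 100x; 27 quarters is 6.75 years
print(round(implied_annual_multiple(100.0, 6.75), 2))  # ~1.98x per year
```

So the claim implies computational intensity roughly doubling every year through the end of the decade, which frames the density and cooling discussion that follows.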
Jeff Clarke Well, I think where Dell plays best is helping all of our customers, large and small - large cloud service providers, sovereign wealth funds, small businesses - deploy AI at scale: helping them develop a strategy, implement it, and ultimately adopt and scale it. And the way we've communicated that broadly to the marketplace is through AI factories - big ones, small ones. Clearly, when we talk about the largest opportunities, where there is a considerable amount of consumption of infrastructure, it's the largest CSPs that are our customers. These are very bespoke designs; they're very custom designs. And what we've been doing is really driving differentiation in our offer. It's beyond the server. It's easy to say, I have a GPU, I put a GPU in a server, and I have one of these things. These are very complex systems. And where we're distinguishing ourselves in the marketplace today, and getting a premium for it, is the engineering. This is a technical pursuit. It's not a sales pursuit. It's a technical pursuit. It's an architectural pursuit. And we are winning - whether it's density, whether it's power, whether it's cooling, ultimately rack integration, and then the ability to deploy those racks at scale in very short periods of time. If there's anything these opportunities have really helped us with, it's that they've really sharpened our edge: we are much quicker than we used to be, more responsive than maybe you remember Dell being. [ph] We are unbelievably responsive on these very large custom designs, and we're winning for it. So it's beyond the box - it's networking, it's storage, it's the services that encompass the solution. That is why we're winning, and that's very scalable to the enterprise. Mike Ng That's great. And just on that point around the customization, the engineering value-add - what is that really driven by? Is there some sort of gap in the reference designs that are out there?
Or are there specific workloads that your customers are seeking to address with AI infrastructure that Dell helps to accomplish? Jeff Clarke The reference designs that exist are very good starting points. But our customers want more density. So whether that started at 64 GPUs in a rack, we're now at 72. We were the first to put in 72, the first to 96. We're working on designs that are well beyond that. Look at what used to be a data center tile, if you will, of 10 to 15 kilowatts of power - now at 50 kilowatts of power, 75 kilowatts of power, now designing over 100 into next year, at 200 beyond that, and at 400 and beyond. It's that engineering competence - the reference designs allow us the latitude to design with them as a basis. Think about what we've done with our 6U compared to our competitors' 8U; they're moving to 6U, we're moving to a 4U. We're getting more I/O slots in it, so we get better performance. We get the GPUs in there. We're able to put together a high-bandwidth fabric with low latency for the networking. That's where we're winning on the engineering side. We started with air cooling. Others will talk about other forms of cooling. You don't need it until you get to certain energy densities - typically, it's 1,000 watts [per device] where you have enough energy density to need direct liquid cooling. The ability to engineer that solution at scale - that's what we're doing. As a former engineer - though they don't want me to call myself an engineer anymore - the ability to differentiate and to help these customers solve their specific problems is immense. And it's why, again, I think we're getting a premium in the marketplace for it and winning. Mike Ng Yes. I want to ask about earnings, which just happened two weeks ago for Dell. Dell posted a quarter where EBIT margins for the ISG segment improved nearly 300 basis points sequentially, and AI server margins also grew.
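Clarke's 1,000-watt liquid-cooling threshold, combined with the GPU counts he mentions (64, 72, 96 per rack), shows why these dense racks land squarely in liquid-cooling territory. A sketch; reading the 1,000 watts as per GPU is an assumption on our part, and the helper deliberately ignores CPU, memory, fan and NIC overhead:

```python
def rack_it_load_kw(gpus_per_rack: int, watts_per_gpu: float) -> float:
    """Approximate rack IT load from GPU count alone.

    Assumes roughly 1,000 W per GPU (per the talk, read as per-device);
    real racks draw more once CPUs, DRAM, NICs and fans are counted.
    """
    return gpus_per_rack * watts_per_gpu / 1000.0

for n in (64, 72, 96):
    print(n, "GPUs ->", rack_it_load_kw(n, 1000.0), "kW")  # 64.0, 72.0, 96.0
```

Even this GPU-only lower bound sits far above the 10-to-15-kilowatt tiles Clarke describes as the historical baseline, which is the arithmetic behind the air-versus-liquid cooling transition.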
I was wondering if you could just give a postmortem on that piece. What's been driving the margin expansion within ISG? What's driving the margin expansion in AI servers? Jeff Clarke For us, it was a very solid quarter, as you mentioned, but the highlights: $25 billion of revenue, 9% growth; EPS of $1.89, up 9%; free cash flow of $1.3 billion; and $1 billion given back in shareholder returns. And within that, you had ISG grow at 38%. And within that 38%, you had certainly what we talked about - AI being a huge component of that. It's great to see, and we improved margins. Part of it is we're selling at a premium in the marketplace and extracting more value than just out of the box. Again, it's our ability to sell the services, deploy at scale, and sell the networking and storage subsystems that go around these integrated rack solutions that is incrementally allowing us to make better margins than we did the previous quarter. On the storage side, it's a great story, at least from my seat. We sold more Dell IP storage. Our PowerMax product, our PowerStore product in the mid-range, our PowerScale, our file and object portfolio, our PowerProtect Data Domain product - all grew in demand double digits, all expanded their margins. So you take our most profitable products: they grow, they expand their margins in the most important region, North America. We do a little better managing discounting, we find new value streams around data reduction, and you have a recipe for why our storage margins expanded. When storage margins do well, ISG does well. And then the third component of the ISG business is what we call the traditional server business, which continues to grow quite nicely - five consecutive quarters of sequential growth, three consecutive quarters of year-over-year growth. We've got good things happening over there in ISG. Mike Ng Yeah.
And I don't want to understate the strength in storage, because two quarters ago, storage was in a slightly tougher place - more third-party IP mix that was a bit of a drag on margins. Maybe you can talk about what changed in those few months? Jeff Clarke Well, clearly, the challenge we had in Q1, as you referenced correctly, is that we had a mix towards partner IP at the expense of Dell IP, and that ratio changed from Q1 to Q2, which obviously helped. Look, I think there were a couple of things that we are navigating. One is a dynamic relationship with our partner IP - specifically VMware - and working through the changes in how the go-to-market is going to work. There's no question that as we've navigated through that, it slowed the business a little bit. But I'd also tell you that I think the real driver is that modern workloads are really favoring what we call a three-tier architecture, for its performance, its scalability and efficiency, and we're seeing that in these new modern workloads taking off. So we're optimistic. I think in our guidance we reflected that we think our storage business is going to grow in the second half of the year, and we're standing by it. Mike Ng Great. That's great. And you talked about the traditional server strength, right? Five consecutive quarters of quarter-on-quarter growth. It's great to see in what still feels like a somewhat cautious IT spending environment. Are we at the point where you're ready to call an inflection, or for traditional servers to return to more normalized growth from here? And is there any disruption from the AI investments that are happening in the market? Jeff Clarke It's a good question. I've been asked that all day today. I'm certainly not ready to sit here and say the market has recovered and it's back to where it was pre this digestion period. But I think there are three signals from what we see that help communicate the market is recovering.
The first one, when we think about it, is that we went through the longest digestion period in the history of the server marketplace - eight quarters. Data centers are full of older product. The older product, while capable when it was bought, on a relative basis today is very, very different in productivity and capability. So you have an aging of the data center that didn't get refreshed through a two-year cycle. That's never happened. And we're beginning to see large customers with large bids begin to refresh. Again, I'm not calling it recovered, but recovering, and we find that a very encouraging signal. The second component, which I think is somewhat related, is that as customers are looking at their AI plans or desires, they're quickly coming to the conclusion: I need space and I need power. So we think consolidation is occurring - you can consolidate older servers onto newer servers and consume less space and less power. And we think that consolidation is a very important consideration with AI. And here's why we think there is consolidation going on: our ASPs continue to expand, driven by more cores, more DRAM and more NAND per server sold. I gave three quick references earlier today that I think would be helpful for the broader audience. Today, a 16G server - which is what we ship today - versus what we shipped four and five years ago, has three times as many cores as that old product, is 25% to 35% more power efficient, and can roughly replace three to five old servers with a new server. So that's space, that's power, that is now available for AI. And then the third component that we think is driving traditional server demand is the continued repatriation of workloads back on-prem, into on-prem private clouds. So it's that backdrop that we think lends itself to the traditional server market recovering and continuing to do well. Mike Ng That's great.
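Clarke's consolidation math - a new server replacing three to five old ones while being 25% to 35% more power efficient - can be turned into a freed-power estimate. Only those ratios come from his remarks; the fleet size and per-server draw below are hypothetical inputs chosen for illustration:

```python
def power_freed_kw(old_servers: int, old_kw_each: float,
                   consolidation_ratio: int, efficiency_gain: float) -> float:
    """Power released by consolidating an old server fleet onto new servers.

    consolidation_ratio: old servers replaced per new server (3-5 per the talk)
    efficiency_gain: per-server power improvement (0.25-0.35 per the talk)
    """
    new_servers = old_servers / consolidation_ratio
    new_kw_each = old_kw_each * (1.0 - efficiency_gain)
    return old_servers * old_kw_each - new_servers * new_kw_each

# Hypothetical fleet: 1,000 old servers drawing 0.5 kW each,
# consolidated 4:1 onto servers that are 30% more efficient.
print(round(power_freed_kw(1000, 0.5, 4, 0.30), 1), "kW freed")  # 412.5 kW
```

Under these assumed inputs, over 80% of the fleet's power draw is released, which is the "space and power now available for AI" argument in quantitative form.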
And as a follow-up to that, there certainly is server demand because of growth in new workloads and the repatriation of workloads from the cloud. How do you think about this whole consolidation concept, one new server replacing three to five old ones? Are you indifferent to that, because the content of the new server is that much higher, from a revenue and profitability perspective?

Jeff Clarke: Yes, Mike, the way at least we look at it is that the number of units may not be what it once was, but the value of the unit is going up considerably. If you look at our ASP growth over the past handful of years, it's considerable in servers, and it's all driven by the three things that I just mentioned: more cores, or more powerful microprocessors, in every server, more DRAM around it and more storage. I'm fine with that. It's good business. It's helping our customers consolidate workloads and be more efficient. It opens up more storage opportunities and provides the space and power for what is coming: AI workloads in the enterprise.

Mike Ng: Yeah. And if I could shift back to AI servers for a moment, Dell has clearly found a tremendous amount of momentum around AI CSPs, large cloud companies. Could you talk a little bit about where we are in enterprise and also sovereign investments in AI infrastructure? I think there's a bit of a debate about whether or not those customers will materialize in a way that continues growth in the segment, to the extent that there's any slowdown among the AI CSPs or Tier 2 clouds.

Jeff Clarke: Yeah. I don't have the sentiment that it's not going to materialize in enterprise. In fact, just the opposite. There's no question AI is coming to the enterprise. One, that's where all the data is, and data is very expensive to move. And in many cases, that data is proprietary, it's unique. It's part of your business model, your value add, your secret sauce. It's not going to be transferred into other things.
So data gravity is clearly driving AI over time to the enterprise. And maybe another way to look at it is this: if you look at the five foundational models, their current revs have been trained on, you pick your number, 30, 40, 50 terabytes of data. The Dell company has hundreds of petabytes of data, and we're not unique. Let's just say it's all value-added data for the moment. That's what we're going to want to train on; more importantly, we're going to do fine-tuning and run inference on our data to serve our customers better. And every customer is going to go through that same sort of calculus. They're going to try to understand what part of their data allows them to serve their customers better and produce their products and services better. And that's what's in front of us. So when we talk about enterprise demand: I think I made reference on the earnings call that the number of enterprise customers grew from Q1 to Q2, and it was bigger than it was in Q4. The amount of revenue dollars coming in from enterprise customers in Q2 was greater than in Q1, which was greater than in Q4. And what we're trying to do with this five-quarter pipeline comment that we make is give a reference for the opportunities we see in front of us; the number of customers is growing and the number of revenue dollars is growing. So we continue to signal that we think enterprises are moving, nowhere near as fast as this handful of customers buying large clusters, and they're not going to deploy it as large clusters. They're going to deploy it as a use model here and a use model there. We're seeing customers experiment, in some cases go from experiment to proof-of-concept, and in many cases go from proof-of-concept into production. Five primary use cases really are driving what we see in enterprise. One is around code generation.
Two is around agents and sales assistants. Three is around content creation and content automation. Four is around customer service, and the fifth is around supply chain. Those use cases are fairly universal across most companies, and it's where we see AI being deployed over time, and where the data is. That was a lot, so I hope it was helpful.

Mike Ng: Yes, it was helpful. I love it. And in terms of enterprise interest, are there any industry verticals that are more front-footed in making these AI infrastructure investments than others?

Jeff Clarke: So if we look at customers in enterprise, there's a subset of this community that is very aggressive in pursuing AI: the quant traders. They love this stuff. Every second matters, right? Every millisecond or nanosecond matters. I think about what's happening in national labs, and deployments in national labs solving significant science problems. I think of pharmaceuticals. I think of what's happening in health care, whether that's looking at radiology film and interpreting it, or, back to pharmaceuticals, looking at proteins and breaking them down to help unlock new drug or health care opportunities. Industrials and manufacturing are clearly customers buying today, and then oil and gas. And then the institutions of higher learning, universities across the globe. So a pretty good smattering, one bigger than another, but all moving into proof of concept and finding use cases where they can deploy AI gear. And again, for us, it's not the box. These are very complicated systems to build, and we use the word systems: helping them with the network fabric, the storage subsystem and the surrounding services to help them deploy. We use a reference of L11 to L13, which is essentially rack-scale integration: testing in our facilities, transportation to the customer site, off the truck, test it, roll it in, install it, and then service it.
That's, we think, very repeatable all the way through the enterprise and down to small companies.

Mike Ng: Yes. And on that installation, servicing and maintenance piece, could you talk about how important services might be for an AI system or an AI server installation, and contrast that with traditional servers? Is there a notable difference?

Jeff Clarke: Yes. Maybe I'll give you a real-life example. We just introduced this notion of an integrated rack-scale system, and in that we talked about 96 GPUs per rack. You deploy that, and let's take a hypothetical 50,000 GPU cluster. It's 52 megawatts, 6,144 nodes, 4,000 switches, 150,000 network connections, 200,000 optics, a couple of hundred miles of network cable, a mile and a half of water pipe, 30 miles or so of rubber hose, and other kinds of power and heat dissipation; someone's going to look that up. It's kind of hard. And the reason I say this is because you can't do just one part of it, or if you do, you need a lot of help. What we've been doing is building our capabilities to be able to deploy, whether that is a 500 GPU cluster or a 50,000 GPU cluster or anything in between, and the technical skills to do it. In fact, the pursuits themselves are very technical and engineering-oriented; I'm involved in all of those. It's fun again. It's really cool talking to customers about architecture, workload and design, and then building it for them. And again, I think that's very applicable to what we'll see in the enterprise. Our job is to build AI factories that are easy to deploy, to make AI easy for our customers.

Mike Ng: That's great. I want to go back to something that you mentioned earlier when we were talking about enterprise investments, which was storage, data gravity, the cost to move data.
And I was wondering if you could just talk about whether there is some sort of difference in the storage solution required for AI workloads and AI infrastructure relative to traditional enterprise storage, and whether Dell has to do anything from a new product innovation perspective to...

Jeff Clarke: So if you look at the attributes of what's happening in what we'll loosely define as modern workloads, a specific example being AI: these modern workloads are very demanding. They're typically bare-metal deployments, typically container-based and/or multi-hypervisor-based. So if you think about it, those are the requirements. And within that, performance is a given; these have to be high-performance storage subsystems. Customers want flexibility. They want scalability. They want efficiency, and ultimately they want to be able to drive cost down. When we look at that, we believe it maps to our 3-tier architecture. The reason is that you can independently scale compute, networking and storage, whereas an alternative architecture like HCI is really great for ease of use and ease of scale, but you can't independently scale networking, compute and storage; they all scale together. In other words, if you need more storage, you have to buy more compute. If you need more compute, you have to buy more storage, and so on. That's not the case with the independent nature of a 3-tier architecture. Then, specifically, what's happening is it's file and object. Most of what's being trained, or what we see being trained in the future and ultimately run inference on, is file and object: scale-out file and object subsystems like our PowerScale product.
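The independent-scaling point can be made concrete with a toy model. The node shapes and capacities below are invented for illustration, not Dell product specs: in an HCI-style design, compute and storage come bundled per node, so hitting a storage target drags extra compute along; in a 3-tier design you add only the tier you need.

```python
# Toy comparison of coupled (HCI-style) vs. independent (3-tier) scaling.
# All node capacities are invented illustrative numbers.
import math

HCI_NODE = {"cores": 64, "storage_tb": 50}   # bundled compute + storage
STORAGE_NODE_TB = 100                        # 3-tier storage-only node

def hci_nodes_for(storage_tb, cores):
    """HCI must satisfy both targets with a single bundled node type."""
    return max(math.ceil(storage_tb / HCI_NODE["storage_tb"]),
               math.ceil(cores / HCI_NODE["cores"]))

def three_tier_nodes_for(storage_tb, cores, compute_node_cores=64):
    """3-tier scales each tier separately: (storage nodes, compute nodes)."""
    return (math.ceil(storage_tb / STORAGE_NODE_TB),
            math.ceil(cores / compute_node_cores))

# Storage-heavy workload: 2 PB of data but only 256 cores of compute needed.
hci = hci_nodes_for(2000, 256)
storage_nodes, compute_nodes = three_tier_nodes_for(2000, 256)
print(f"HCI: {hci} bundled nodes, {hci * HCI_NODE['cores']} cores provisioned")
print(f"3-tier: {storage_nodes} storage + {compute_nodes} compute nodes")
```

With these toy numbers, the HCI path over-provisions compute by an order of magnitude just to reach the storage target, which is the coupling Clarke is describing.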
The other one, in this very high-performance foundational model training, is parallel file systems, which is why in May we made an announcement of Project Lightning: we're developing our own ground-up, AI-focused parallel file system. Many that exist today came out of the realm of HPC. I understand why that's the case, and they solve a considerable part of the problem. We think the ability to design it with low-latency persistent memory, to bring high throughput end to end, and to support many concurrent users, to really deal with the transient usage that parallel file systems handle, is an advantage we'll bring to the table. So we're developing that. And then our file and object portfolio is very robust, and that's helping us win in the storage space inside AI. Does that help?

Mike Ng: That's incredibly helpful. Yes. Thank you for that. I'd like to move on to the Client Solutions Group part of Dell's business. As we move towards the end of 2024, we're, I guess, three years from peak PC shipments, which was at the end of 2021. Where are we on the PC refresh at this stage? What's gone better than expected? What may have fallen a little short relative to what we may have thought at the beginning of the year?

Jeff Clarke: Well, at the beginning of the year, I thought we'd be talking about a PC refresh, and today we're struggling to even use the word refresh. It hasn't happened, and I don't see it happening in the next few weeks. I think it's pushing. Clearly, we reflected in our guidance that it's pushing into the end of the year and into next year. I remain very bullish about the opportunities around a PC refresh. You might ask why. Well, first of all, the installed base is old. It's never been older. It's large and old. The first COVID systems are four years and five months old now.
And they were all notebooks, because we sent everybody home and they had to be mobile, and the average notebook is on a three- to three-and-a-half-year replacement cycle, maybe four-plus; they're not designed to work much past that. So you have this aging component, it's more notebooks, and it has to move. The second thing is you have this forcing function called Windows 10 end of life. That's not moving. October '25, Windows 10 is no longer supported, and we're one more quarter closer to it. So you have this wave that's mounting, if you will, that has to transition. History tells us that the more you press up against a Windows retirement, the more that rebound happens almost instantaneously, because you have to move those systems out. I read some research the other day that said 61% of all Windows licenses in North America are still Windows 10. That's considerably more than at the same point in time, 14 months ahead of the Windows 7 end of life, which tells you it's pushing right, but it's also mounting. And then the third component, again, as a former PC designer: I mean, there's some cool stuff coming. We're going to have NPUs in all of these things. We're going to see an application base develop here in short order; that's underway. The greatest general-purpose productivity device on the planet, the PC, is only going to get better. They're all going to have capable NPUs in them. When I'm sitting here doing something you can't do, you're going to look at what you have and go, mine has an NPU and yours doesn't; you're going to want what I have. And from a historical perspective in the PC, hardware has always led software, so it's not uncommon for us to have hardware capability in the product before the software wave happens. And I think that's what we're set up for here, with a purpose-built accelerator, an NPU, in every PC. Again, I've been bold: I said that by the end of the decade, I think the installed base flips over and everything is going to be an AI PC.
We've got a little work to do between now and the end of the decade, but I'm excited about it. There's a coiled spring of PC demand building toward October 2025, right? And there were announcements last week by Intel of the next-generation processor coming with a more capable NPU. So it's continuing.

Mike Ng: Great. Just while we're on the topic of PCs, Dell has leading ASPs across the industry, if the third-party market research is to be believed. What is driving that relative to the broader industry? Is it the premium consumer mix? Is it the commercial mix? A function of both?

Jeff Clarke: Well, I think it's a little bit of both. The way we look at it, our ASPs are roughly twice the industry average. The industry is $600 and change and, you can do the math, ours is $1,200 and change. So roughly 2x. What drives that? Clearly, a greater mix of commercial. Our commercial-to-consumer mix is 85-15 [ph], significantly higher than the rest of the industry. In general, with the way we sell, with our direct sales force, we sell a richer mix: a more capable CPU, a higher-resolution screen, more DRAM, more storage than the average PC, driving that delta between the two. I think our direct model of driving attach, docks, displays, services, drives a differentiated margin and a differentiated ASP. So that opportunity around business model, mix, selling technique and a commitment to drive the things around the box is very important to us. And it's why you've probably seen recently, and we've talked about it in forums like this, that we're extending that PC reach to more peripherals. There's a $200 opportunity around every unit for speakers, keyboards, mice, cameras, webcams, et cetera, and we're trying to tap into that increasingly, because it drives greater value.

Mike Ng: That's great. In the last couple of minutes here, maybe I'll ask you to make a few closing remarks on where you see Dell heading over the next few years.
Are there any specific themes or factors that you want investors to keep top of mind?

Jeff Clarke: Well, again, I'd like the audience to leave with this: look, we're disciplined operators. We're committed to consistent revenue growth and consistent profit growth. We run the company on free cash flow generation, and we've made a commitment of capital distribution back to our shareholders. We have a long-term value framework that we believe we can operate within, and we're committed to that. Again, we're disciplined operators. I think what's lost sometimes is this notion that we can't apply what we do to other sectors. And I think AI is actually demonstrating that the Dell disciplined model, the operators that we believe we are, can absolutely apply to new growth categories. You may think of Dell, now 40 years old, as maybe slow; not at all. We're architecting significant GPU cluster designs in handfuls of weeks, responding to customers in days, and winning, and winning with superior design and superior services. I think the four things that make us special are our large go-to-market presence, our supply chain, our R&D and innovation engine, and our services. And we can apply those to broader categories, AI being an example. We're having success in 5G telco. The Edge is another opportunity for us, and we're going to continue to do that. That's what we do.

Mike Ng: Jeff, that's a great way to cap off the session. Thank you so much.
[6]
NVIDIA Corporation (NVDA) CEO Jensen Huang presents at Goldman Sachs Communacopia + Technology Conference (Transcript)
I flew in late last night. I didn't really expect to be on stage at 7:20 in the morning, but it seems everybody else did. So, here we are. Jensen, thank you for being here. I'm delighted to be here. Thank you all for being here. I hope everybody has been enjoying the conference. It's a fantastic event: lots of great companies, a couple of thousand people here. So, really terrific. And obviously a real highlight and a real privilege to have Jensen, President and CEO of NVIDIA, here. Since you founded NVIDIA in 1993, you've pioneered accelerated computing. The company's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computing and ignited the era of modern AI. Jensen holds a BSEE degree from Oregon State University and an MSEE degree from Stanford. And so I want to start by welcoming you, Jensen. Everybody, please welcome Jensen to the stage. We're going to try to do this really casually, and I'm going to try to get you talking about some things that I know you're passionate about. But I just want to start: 31 years ago you founded the company, and you've transformed it from a gaming-centric GPU company to one that offers a broad range of hardware and software to the data center industry. I'd just like you to start by talking a little bit about the journey. When you started, what were you thinking, and how has it evolved? Because it's been a pretty extraordinary journey. And then maybe, building from that, talk a little bit about your key priorities and how you're looking at the world going forward.

Jensen Huang: Yeah, David, it's great to be here. The thing that we got right, I would say, is our vision that there would be another form of computing that could augment general-purpose computing to solve problems that a general-purpose instrument won't ever be good at.
And that processor would start out doing something that was insanely hard for CPUs to do, and that was computer graphics. But we would expand that over time to do other things. The first thing that we chose, of course, was image processing, which is complementary to computer graphics. We extended it to physics simulation because, in the application domain that we selected, video games, you want it to be beautiful, but you also want it to be dynamic, to create virtual worlds. We took it step by step by step, and we took it into scientific computing beyond that. One of the first applications was molecular dynamics simulation. Another was seismic processing, which is basically inverse physics. Seismic processing is very similar to CT reconstruction, another form of inverse physics. And so we just took it step by step by step, reasoned about complementary types of algorithms, adjacent industries, and kind of solved our way here, if you will. But the common vision at the time was that accelerated computing would be able to solve problems that are interesting, and that we would keep the architecture consistent, meaning an architecture where software that you develop today can run on the large installed base you've left behind, and software that you created in the past is accelerated even further by new technology. This way of thinking about architecture compatibility, creating a large installed base, taking the software investment of the ecosystem along with us: that psychology started in 1993, and we've carried it to this day. It is the reason why NVIDIA's CUDA has such a massive installed base, because we always protected it. Protecting the investment of software developers has been the number one priority of our company since the very beginning.
And going forward, some of the things that we solved along the way were, of course, learning how to be a founder, learning how to be a CEO, learning how to conduct a business, learning how to build a company. These are all new skills, and we kind of learned them while inventing the modern computer gaming industry. People don't know this, but NVIDIA has the largest installed base of video game architecture in the world. GeForce is some 300 million gamers in the world, still growing incredibly well, super vibrant. And so every single time we enter a new market, we have to learn new algorithms and new market dynamics, and create new ecosystems. And the reason we have to do that is because, unlike a general-purpose computer, where once you've built the processor everything eventually just kind of works, we build accelerated computers, which means you have to ask yourself: what do you accelerate? There's no such thing as a universal accelerator.

Toshiya Hari: Dig down on this a little bit deeper; just talk about the differences between general-purpose and accelerated computing.

Jensen Huang: If you look at the body of software that you wrote, there's a lot of file I/O, there's setting up the data structures, and then there's a part of the software that has the magic kernels, the magic algorithms. And those algorithms are different depending on whether it's computer graphics or image processing or whatever it happens to be. It could be fluids, it could be particles, it could be inverse physics, as I mentioned, it could be image-domain-type stuff. All these different algorithms are different. And if you create a processor that is somehow really, really good at those algorithms, and you complement the CPU, where the CPU does whatever it's good at, then theoretically you can take an application and speed it up tremendously.
And the reason for that is that usually some 5% or 10% of the code represents 99.999% of the runtime. So if you take that 5% of the code and offload it onto our accelerator, then technically you should be able to speed up the application 100 times. And it's not abnormal that we do that; it's not unusual. We'll speed up image processing by 500 times. And now we do data processing. Data processing is one of my favorite applications, because almost everything related to machine learning, which is a data-driven way of doing software, involves data processing. It could be SQL data processing, it could be Spark-type data processing, it could be vector-database-type processing, all kinds of different ways of processing either unstructured data or structured data, which is data frames, and we accelerate the living daylights out of that. But in order to do that, you have to create that library, that fancy library on top. In the case of computer graphics, we were fortunate to have Silicon Graphics' OpenGL and Microsoft's DirectX, but outside of those, no libraries really existed. So, for example, one of our most famous libraries is a library kind of like SQL is a library; SQL is a library for in-storage computing. We created a library called cuDNN. cuDNN is the world's first neural network computing library. And so we have cuDNN, we have cuOpt for combinatorial optimization, we have cuQuantum for quantum simulation and emulation, all kinds of different libraries, cuDF for data frame processing, for example. All these different libraries have to be invented; they take the algorithms that run in the application and refactor them in a way that our accelerators can run. And if you use those libraries, then you get a 100x speedup. Incredible.
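Huang's "5% of the code, 99.999% of the runtime" observation is essentially Amdahl's law. A quick sketch; the runtime fractions and accelerator speedups below are example values, not NVIDIA benchmarks.

```python
# Amdahl's law: overall speedup when a fraction p of runtime is offloaded
# to an accelerator that runs that part s times faster.

def amdahl(p, s):
    """Overall speedup for offloadable runtime fraction p and kernel speedup s."""
    return 1 / ((1 - p) + p / s)

# If 99% of runtime lives in accelerable kernels and the accelerator runs
# them 500x faster, the whole application speeds up ~83x; at 99.9% it's ~334x.
for p in (0.99, 0.999):
    print(f"p={p}: {amdahl(p, 500):.0f}x overall")

# The un-accelerated remainder caps the gain: even an infinitely fast
# accelerator can't beat 1/(1-p), i.e. 100x when 99% is offloaded.
print(f"ceiling at p=0.99: {amdahl(0.99, float('inf')):.0f}x")
```

This is why the fraction of runtime captured by the library matters more than the raw kernel speedup, which is the argument Huang is making for building a library per domain.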
And so the concept is simple, and it made a lot of sense. But the problem is: how do you go and invent all these algorithms and get the video game industry to use them, write these algorithms and get the entire seismic processing and energy industry to use them, write new algorithms and get the entire AI industry to use them? You see what I'm saying? For every single one of these libraries, first we had to do the computer science. Second, we had to go through the ecosystem development. And we had to go convince everybody to use it, and then think about what kind of computers it wants to run on; all the different computers are different. And so we just did it one domain after another domain after another domain. We have a rich library for self-driving cars. We have a fantastic library for robotics, an incredible library for virtual screening, whether it's physics-based virtual screening or neural-network-based virtual screening, an incredible library for climate tech. And so, one domain after another domain, we have to go meet friends and create the market. What NVIDIA is really good at, as it turns out, is creating new markets. We've done it for so long now that it seems like NVIDIA's accelerated computing is everywhere, but we really had to do it one at a time, one industry at a time.

Toshiya Hari: So, I know that many investors in the audience are super focused on the data center market. And it would be interesting to get your perspective, the company's perspective, on the medium- and long-term opportunity set. Obviously, your industry is enabling, in your term, the next industrial revolution. What are the challenges the industry faces? Talk a little bit about how you view the data center market as we sit here today.

Jensen Huang: There are two things that are happening at the same time, and they often get conflated, so it's helpful to tease them apart. For the first thing, let's start with a condition where there's no AI at all.
Well, in a world where there's no AI at all, general-purpose computing has still run out of steam. We know that Dennard scaling, for all the people in the room who enjoy semiconductor physics, the Mead-Conway scaling of transistors and the Dennard scaling of increased performance at iso-power, or increased performance at iso-cost: those days are over. And so we're not going to see CPUs, general-purpose computers, that are twice as fast every year ever again. We'll be lucky if we see twice as fast every 10 years. Remember, back in the old days, Moore's Law was 10 times every five years, 100 times every 10 years. All we had to do was just wait for the CPUs to get faster. And as the world's data centers continued to process more information, CPUs got twice as fast every single year, so we didn't see computation inflation. But now that's ended; we're seeing computation inflation. And so the thing that we have to do is accelerate everything we can. If you're doing SQL processing, accelerate that. If you're doing any kind of data processing at all, accelerate that. If you're creating an Internet company and you have a recommender system, absolutely accelerate it, and they're now fully accelerated. A few years ago this was all running on CPUs, but now the world's largest data processing engine, which is a recommender system, is fully accelerated. And so if you have recommender systems, if you have search systems, any large-scale processing of any large amount of data, you just have to accelerate it. And so the first thing that's going to happen is that the world's trillion dollars of general-purpose data centers are going to get modernized into accelerated computing. That's going to happen no matter what. And the reason, as I described, is that Moore's Law is over.
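The scaling rates Huang quotes can be sanity-checked with simple compound-growth arithmetic. A sketch; the total factors are his round numbers, and the per-year rates are just what they imply.

```python
# Compound annual growth implied by the scaling rates quoted above.

def annual_rate(total_factor, years):
    """Per-year multiplier that compounds to total_factor over the given years."""
    return total_factor ** (1 / years)

moore = annual_rate(100, 10)        # "100 times every 10 years"
accel = annual_rate(1_000_000, 10)  # "1,000,000x in the last 10 years"
print(f"Moore's Law era: ~{moore:.2f}x per year")        # ~1.58x/year
print(f"Accelerated computing: ~{accel:.2f}x per year")  # ~3.98x/year
```

So "10x every five years" means roughly 1.58x per year, while a millionfold gain over a decade implies roughly 4x per year, which is the gap between the two curves Huang is contrasting.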
And so the first dynamic you're going to see is the densification of computers. These giant data centers are super inefficient because they're filled with air, and air is a lousy conductor of electricity. And so what we want to do is take that, call it 50-, 100-, 200-megawatt data center, which is sprawling, and densify it into a really, really small data center. If you look at one of our server racks, NVIDIA server racks look expensive, and a rack could be a couple of million dollars, but it replaces thousands of nodes. The amazing thing is that just the cables for connecting the old general-purpose computing systems cost more than replacing all of them and densifying into one rack. The other benefit of densifying is that once you've densified it, you can liquid-cool it: it's hard to liquid-cool a data center that's very large, but you can liquid-cool a data center that's very small. And so the first thing that we're doing is modernizing data centers: accelerating them, densifying them, making them more energy efficient. You save money, you save power, you're much more efficient. If we just focused on that, that's the next 10 years; we'll just accelerate that. Now, of course, there's a second dynamic. Because NVIDIA's accelerated computing brought such enormous cost reductions to computing, instead of Moore's Law giving 100x in the last 10 years, we scaled computing by 1,000,000x. And so the question is: what would you do differently if your plane traveled a million times faster? All of a sudden, people said, hey, listen, why don't we just use computers to write software? Instead of us trying to figure out what the features are, instead of us trying to figure out what the algorithms are, we'll just give all the data, all the predictive data, to the computer and let it figure out what the algorithm is.
Machine learning, Generative AI. And so we did it at such large scale on so many different data domains that now computers understand not just how to process the data, but the meaning of the data. And because they understand multiple modalities at the same time, they can translate data: English to images, images to English, English to proteins, proteins to chemicals. Because it understood all of the data at one time, it can now do all this translation. We call it Generative AI. A large amount of text into a small amount of text, a small amount of text into a large amount of text, and so on and so forth. We're now in this computing revolution. And what's amazing is that the first trillion dollars of data centers is going to get accelerated, and we invented this new type of software called Generative AI. This Generative AI is not just a tool, it is a skill. This is the interesting thing, and this is why a new industry has been created. The reason is that if you look at the whole IT industry up until now, we've been making instruments and tools that people use. For the very first time, we're going to create skills that augment people. And so that's why people think that AI is going to expand beyond the trillion dollars of data centers and IT and into the world of skills. So what's a skill? A digital chauffeur is a skill: autonomous driving. A digital assembly-line worker: a robot. A digital customer service agent: a chatbot. A digital employee for planning NVIDIA's supply chain: that would be a digital SAP agent. We use a lot of ServiceNow in our company, and we have digital employee services. And so now we have all these digital humans, essentially. And that's the wave of AI that we're in now.

Toshiya Hari: So, let's step back and shift a little.
Based on everything you just said, there's definitely an ongoing debate in financial markets as to whether or not, as we continue to build this AI infrastructure, there is an adequate return on investment. How would you assess customer ROI at this point in the cycle? And if you look back at PCs and cloud computing, when they were at similar points in their adoption cycles, how did the ROIs look then compared to where we are now as we continue to scale? Jensen Huang Yeah, fantastic. So, let's take a look. Before cloud, the major trend was virtualization, if you guys remember that. And virtualization basically said, let's take all of the hardware we have in the data center and virtualize it into essentially a virtual data center, and then we could move workloads across the data center instead of associating them directly with a particular computer. As a result, the utilization of that data center improved, and we saw essentially a 2 to 1 -- 2.5 to 1, if you will -- cost reduction in data centers overnight. That was virtualization. The second thing we did, after we virtualized it, was put those virtual computers right into the cloud. As a result, multiple companies -- not just one company's many applications -- could share the same resource: another cost reduction, and utilization again went up. By the way, the last 10, 15 years of all this stuff happening masked the fundamental dynamic which was happening underneath, which is Moore's Law ending. We found another 2x in cost reduction, and it hid the end of transistor scaling, of CPU scaling. Then all of a sudden, we had already gotten the utilization and cost reductions out of both of these things, and now we're out. And that's the reason why we see data center and computing inflation happening right now. And so the first thing that's happening is accelerated computing.
And so it's not uncommon for you to take your data processing work -- there's a thing called Spark. Anyone who's used it knows: Spark is probably the most used data processing engine in the world today. If you use Spark and you accelerate it with NVIDIA in the cloud, it's not unusual to see a 20 to 1 speed-up. And, of course, you pay for it: the NVIDIA GPU augments the CPU, so the computing cost goes up a little bit -- maybe it doubles -- but you reduce the computing time by about 20 times. And so you get a 10x savings. And it's not unusual to see this kind of ROI for accelerated computing. So I would encourage all of you: accelerate everything that you can, and once you accelerate it, run it with GPUs. And so that's the instant ROI that you get from acceleration. Now, beyond that, the Generative AI conversation is in the first wave of GenAI, where the infrastructure players like ourselves and all the cloud service providers put the infrastructure in the cloud so that developers could use these machines to train the models, fine-tune the models, guardrail the models, and so on and so forth. And the return on that is fantastic, because the demand is so great that every dollar they spend with us translates to $5 worth of rentals. And that's happening all over the world, and everything is all sold out. And so the demand for this is just incredible. Some of the applications we already know about -- the famous ones, of course: OpenAI's ChatGPT, GitHub Copilot, or the code generators that we use in our company -- the productivity gains are just incredible. There's not one software engineer in our company today who doesn't use code generators, either the ones that we built ourselves for CUDA or USD -- which is another language that we use in the company -- or Verilog, or C and C++ code generation.
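The Spark acceleration arithmetic above -- cost roughly doubles, runtime drops about 20x, net 10x savings -- reduces to a one-line formula. A minimal sketch, using the transcript's rough numbers as inputs:

```python
# Sketch of the accelerated-computing ROI arithmetic: total spend scales with
# (cost per unit time) * (run time), so the effective savings factor is
# speedup / cost_multiplier. The 2x cost and 20x speedup are the rough
# figures quoted in the conversation, not benchmark results.

def accelerated_roi(cost_multiplier: float, speedup: float) -> float:
    """Effective cost-savings factor for an accelerated workload.

    new_cost = baseline_cost * cost_multiplier / speedup,
    so the savings factor is speedup / cost_multiplier.
    """
    return speedup / cost_multiplier

# GPU augments the CPU: hourly cost "maybe doubles", runtime drops ~20x.
savings = accelerated_roi(cost_multiplier=2.0, speedup=20.0)
print(f"Effective savings: {savings:.0f}x")  # -> Effective savings: 10x
```

The formula also makes the sensitivity clear: the ROI only breaks even when the speedup falls below the cost multiplier.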
And so I think the days of every line of code being written by software engineers -- those are completely over. And the idea that every one of our software engineers would essentially have companion digital engineers working with them 24/7, that's the future. And so the way I look at NVIDIA: we have 32,000 employees, and those 32,000 employees are surrounded by, hopefully, 100x more digital engineers. Toshiya Hari Sure. Lots of industries are embracing this. What use cases, which industries are you most excited about? Jensen Huang Well, in our company, we use it for computer graphics. We can't do computer graphics anymore without artificial intelligence. We compute one pixel, we infer the other 32. I mean, it's incredible. And so we hallucinate, if you will, the other 32, and it looks temporally stable, it looks photorealistic, and the image quality is incredible, the performance is incredible, and the amount of energy we save -- computing one pixel takes a lot of energy; that's computation. Inferring the other 32 takes very little energy, and you can do it incredibly fast. So one of the takeaways there is that AI isn't just about training the model -- of course, that's just the first step. It's about using the model. And so when you use the model, you save enormous amounts of energy and enormous amounts of processing time. So we use it for computer graphics. If not for AI, we wouldn't be able to serve the autonomous vehicle industry. If not for AI, the work that we're doing in robotics, digital biology -- just about every tech bio company that I meet these days is built on top of NVIDIA, and they're using it for data processing or generating proteins for... It's incredible. Small molecule generation, virtual screening. I mean, that whole space is going to get reinvented for the very first time with computer-aided drug discovery because of artificial intelligence. So, incredible work being done there. Toshiya Hari Yeah.
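The "compute one pixel, infer the other 32" claim implies a simple energy budget. The sketch below assumes an illustrative per-pixel inference cost of 2% of full rendering -- a made-up figure, since the transcript only says inference "takes very little energy":

```python
# Energy arithmetic for rendering 1 of every 33 pixels and inferring the rest.
# The 2% inference cost is an assumed illustrative value, not an NVIDIA figure.

def frame_energy(total_pixels: int, rendered_fraction: float,
                 render_cost: float, infer_cost: float) -> float:
    """Energy per frame when only a fraction of pixels is fully rendered
    and the remainder is inferred at a lower per-pixel cost."""
    rendered = total_pixels * rendered_fraction
    inferred = total_pixels - rendered
    return rendered * render_cost + inferred * infer_cost

pixels = 3840 * 2160  # a 4K frame
full = frame_energy(pixels, 1.0, render_cost=1.0, infer_cost=0.0)
hybrid = frame_energy(pixels, 1 / 33, render_cost=1.0, infer_cost=0.02)
print(f"Energy vs. full render: {hybrid / full:.1%}")  # -> Energy vs. full render: 5.0%
```

Even with generous assumptions about inference cost, the budget is dominated by the small rendered fraction, which is the shape of the energy-savings argument.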
Let's talk about competition and your competitive moat. There are certainly groups, public and private companies, looking to disrupt your leadership position. How do you think about your competitive moat? Jensen Huang Well, first of all, I would say several things are very different about us. The first thing is to remember that AI is not about a chip. AI is about an infrastructure. Today's computing is not: build a chip, and people come buy your chips and put them into a computer. That's really kind of 1990s. The way that computers are built today -- if you look at our new Blackwell system, we designed seven different types of chips to create the system. Blackwell is one of them. And so the amazing thing is, when you want to build this AI computer, people say words like supercluster, infrastructure, supercomputer, for good reason, because it's not a chip and it's not a computer per se. We're building entire data centers. And if you ever look at one of these superclusters, imagine the software that has to go into it to run it. There is no Microsoft Windows for it. Those days are over. All the software that's inside that computer is completely bespoke. Somebody has to go write that. So the person who designs the chip and the company that designs that supercomputer, that supercluster, and all the software that goes into it -- it makes sense that it's the same company, because it will be more optimized, more performant, more energy efficient, more cost effective. And so that's the first thing. The second thing is, AI is about algorithms.
And we're really, really good at understanding what the algorithm is, what the implication is for the computing stack underneath, and how to distribute that computation across millions of processors, running it for days on end, with the computer being as resilient as possible, achieving great energy efficiency, getting the job done as fast as possible, and so on and so forth. And so we're really, really good at that. And then lastly, in the end, AI is computing. AI is software running on computers. And we know that the most important thing for computers is install base: having the same architecture across every cloud and from on-prem to the cloud, and having that same architecture available whether you're building it in the cloud, in your own supercomputer, or trying to run it in your car, or some robot, or some PC. Having that same identical architecture that runs all the same software is a big deal. It's called install base. And so the discipline that we've had for the last 30 years has really led to today. And it's the reason why the most obvious architecture to use, if you were to start a company, is NVIDIA's architecture. Because we're in every cloud, and we're available anywhere you'd like to buy. And whatever computer you pick up, so long as it says NVIDIA inside, you know you can take the software and run it. Toshiya Hari Yeah. You're innovating at an incredibly fast pace. I want you to talk a little bit more about Blackwell: four times faster on training, 30 times faster on inference than its predecessor, Hopper. It just seems like you're innovating at such a quick pace. Can you keep up this rapid pace of innovation? And when you think about your partners, how do they keep up with the pace of innovation you're delivering? Jensen Huang The pace of innovation -- our basic methodology starts from the fact that, remember, we're building an infrastructure. There are seven different chips. Each chip's rhythm is probably, at best, two years. At best, two years.
We can give it a midlife kicker every year. But architecturally, if you're coming up with a new architecture every two years, you're running at the speed of light, okay? You're running insanely fast. Now, we have seven different chips, and they all contribute to the performance. And so we can innovate and bring a new AI cluster, a supercluster, to the market every single year that's better than the last generation, because we have so many different pieces to work with. And the benefit of performance at the scale that we're doing directly translates to TCO. And so when Blackwell is three times the performance for somebody who has a given amount of power -- say, 1 gigawatt -- that's three times more revenues. That performance translates to throughput, and that throughput translates to revenues. And so for somebody who has a gigawatt of power to use, you get three times the revenues. There's no way you can give somebody a cost reduction or a discount on chips to make up for three times the revenues. And so through the ability to deliver that much more performance -- integrating all these different parts, optimizing across the whole stack and across the whole cluster -- we can deliver better and better value at much higher rates. The opposite is equally true. For ISO power, you get three times the revenues. For ISO spend, you get three times the performance, which is another way of saying cost reduction. And so we have the best perf per watt, which is your revenues, and the best perf per TCO, which means your gross margins. And so we keep pushing this out to the marketplace, and customers get to benefit from it every single year, not just once every two years. And it's architecturally compatible, so the software you developed yesterday will run tomorrow, and the software you develop today will run across your entire install base. So we can run incredibly fast.
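The ISO-power / ISO-spend framing above is symmetric, and a minimal sketch makes that explicit. The 3x figure is from the conversation; the unit values below are placeholders chosen so only the ratio matters:

```python
# Sketch of "performance translates to throughput, throughput translates to
# revenues" under a fixed power budget. Unit revenue/perf values are
# placeholder assumptions; only the ratios are meaningful.

def revenue_at_iso_power(power_mw: float, perf_per_mw: float,
                         revenue_per_perf: float) -> float:
    """Revenue for a fixed power budget: more perf per watt -> more revenue."""
    return power_mw * perf_per_mw * revenue_per_perf

budget_mw = 1000.0  # "say, 1 gigawatt"
baseline = revenue_at_iso_power(budget_mw, perf_per_mw=1.0, revenue_per_perf=1.0)
blackwell = revenue_at_iso_power(budget_mw, perf_per_mw=3.0, revenue_per_perf=1.0)
print(f"Revenue ratio at ISO power: {blackwell / baseline:.0f}x")  # -> 3x

# The dual statement: at ISO spend, 3x performance means each unit of work
# costs one-third as much -- the "cost reduction" phrasing of the same fact.
print(f"Cost per unit of work: {baseline / blackwell:.2f} of the old cost")
```

The symmetry is why no per-chip discount on the older generation can close the gap: the comparison is on revenue per watt, not price per chip.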
If every single architecture were different, you couldn't do this. It would take a year just to cobble together a system. Because we build everything together before the day we ship it to you -- and it's pretty famous, somebody tweeted that 19 days after we shipped systems to them, they had a supercluster up and running. 19 days. You can't do that if you're cobbling together all these different chips and writing the software; you'd be lucky if you could do it in a year. And so I think our ability to transfer our innovation pace to customers -- getting more revenues, getting better gross margins -- that's a fantastic thing. Toshiya Hari The majority of your supply chain partners operate out of Asia, particularly Taiwan. Given what's going on geopolitically, how are you thinking about that as you look forward? Jensen Huang Yeah, the Asia supply chain, as you know, is really, really sprawling and interconnected. People think of a chip when we say GPU, because a long time ago, when I announced a new generation of chips, I would hold up the chip, and that was the new GPU. NVIDIA's new GPUs are 35,000 parts, weigh 80 pounds, and consume 10,000 amps. When you rack it up, it weighs 3,000 pounds. These GPUs are so complex, they're built like an electric car, with components like an electric car. And so the ecosystem is really diverse and really interconnected in Asia. We try to design diversity and redundancy into every aspect, wherever we can. And the last part of it is to have enough intellectual property in our company that, in the event we have to shift from one fab to another, we have the ability to do it. Maybe the process technology is not as great, maybe we won't be able to get the same level of performance or cost, but we will be able to provide the supply. And so I think, in the event anything were to happen, we should be able to pick up and fab it somewhere else.
We're fabbing at TSMC because it's the world's best -- and not by a small margin, it's the world's best by an incredible margin. And so it's not only the long history of working with them and the great chemistry, it's their agility and the fact that they can scale. Remember, NVIDIA's revenue last year had a major hockey stick. That major hockey stick wouldn't have been possible if not for the supply chain responding. And so the agility of that supply chain, including TSMC, is incredible. In less than a year, we've scaled up CoWoS capacity tremendously, and we're going to have to scale it up even more next year, and even more the year after that. But nonetheless, their agility and their capability to respond to our needs are just incredible. And so we use them because they're great, but if necessary, of course, we can always bring up others. Toshiya Hari Yeah. The company is incredibly well-positioned, and we've talked about a lot of great stuff. What do you worry about? Jensen Huang Well, our company works with every AI company in the world today. We're working with every single data center in the world today. I don't know of one data center, one cloud service provider, one computer maker we're not working with. And so what comes with that is enormous responsibility. We have a lot of people on our shoulders, and everybody's counting on us. And demand is so great that delivery of our components and our technology and our infrastructure and software is really emotional for people, because it directly affects their revenues, it directly affects their competitiveness. And so we probably have more emotional customers today than -- and deservedly so. If we could fulfill everybody's needs, then the emotion would go away. But it's very emotional, it's really tense. We've got a lot of responsibility on our shoulders, and we're trying to do the best we can. And here we are ramping Blackwell, and it's in full production.
We'll ship in Q4, start scaling in Q4, and continue into next year. And the demand is so great. Everybody wants to be first, and everybody wants to be most, and everybody wants to be -- and so the intensity is really, really quite extraordinary. And so I think it's fun to be inventing the next computer era. It's fun to see all these amazing applications being created. It's incredible to see robots walking around. It's incredible to have these digital agents coming together as a team, solving problems in your computer. It's amazing to see the AIs that we're using to design the chips that will run our AIs. All of that is incredible to see. The part of it that is really intense is just the world on our shoulders. And so less sleep is fine; three solid hours, that's all we need. Toshiya Hari Well, good for you. I need more than that. I could spend another half-hour. Unfortunately, we've got to stop. Jensen, thank you very much for being here and chatting with us today.
ServiceNow sets ambitious growth targets, Digital Realty Trust addresses data center demand, and Dell Technologies highlights AI and as-a-service trends in recent conference calls.
ServiceNow, a leading cloud computing platform, has set its sights on becoming the world's largest enterprise software company by 2030. During a recent earnings call, CEO Bill McDermott outlined the company's strategy to achieve this goal. ServiceNow plans to leverage its strong position in IT service management and expand into new markets such as customer service, HR service delivery, and security operations.
At the Goldman Sachs Communacopia Technology Conference, ServiceNow's management further elaborated on their growth strategy. The company is focusing on expanding its total addressable market (TAM) through innovation and strategic partnerships. They emphasized the importance of AI-driven automation and the potential for significant productivity gains across various industries.
Digital Realty Trust, a major player in the data center industry, shared insights on market trends during recent conference calls. At the Bank of America 2024 Global Real Estate Conference, the company discussed the growing demand for data center capacity, driven by AI and cloud computing advancements. Digital Realty highlighted its strategic positioning to capitalize on these trends, particularly in key markets across North America, Europe, and Asia-Pacific.
During the Goldman Sachs Communacopia Technology Conference, Digital Realty's management provided additional context on the company's expansion plans and the impact of AI on data center requirements. They noted the increasing importance of power density and cooling solutions to support AI workloads, positioning the company to meet evolving customer needs.
Dell Technologies, a leading provider of IT infrastructure and solutions, shared its strategic priorities at the Goldman Sachs Communacopia Technology Conference. The company emphasized its commitment to AI-enabled products and services, recognizing the transformative potential of AI across various industries. Dell highlighted its partnerships with NVIDIA and other technology leaders to deliver comprehensive AI solutions to customers.
Additionally, Dell discussed the growing importance of as-a-service consumption models in the enterprise IT space. The company is expanding its APEX portfolio, which offers flexible, consumption-based IT solutions to meet evolving customer preferences for OpEx-oriented spending. This shift aligns with broader industry trends towards more agile and scalable IT infrastructure deployments.
The conference calls from ServiceNow, Digital Realty Trust, and Dell Technologies reveal a convergence of trends in the enterprise technology landscape. AI adoption, cloud computing growth, and the shift towards as-a-service models are driving significant changes across the industry. As these companies pursue their respective strategies, they are positioning themselves to capitalize on these trends and shape the future of enterprise IT.