Curated by THEOUTPOST
On Sat, 24 Aug, 12:01 AM UTC
4 Sources
[1]
Nvidia: Buy Any Dip Before The Earnings Report (NASDAQ:NVDA)
We discuss in a simplified manner how the tech industry is progressing in challenging Nvidia's moat, and the stock price potential ahead. NVIDIA Corporation (NASDAQ:NVDA) stock saw a steep correction of around 27% recently, though it has now recovered most of those losses. The reversal of the yen carry trade and the delay in Blackwell GPU shipping, in addition to top customers already seeking alternatives to Nvidia's chips, have certainly emboldened the bears to call the top for NVDA. And with the Q2 earnings release in a few days, investors will be keenly looking out for any impact on guidance for the rest of 2024 and 2025. For investors concerned about the Blackwell delay impacting sales growth projections for the quarters ahead, it is important to understand the CUDA software moat that should sustain demand for Nvidia's GPUs regardless of minor setbacks. The real battleground is on the software side, with Nvidia leveraging the value proposition and stickiness of the CUDA platform to keep customers entangled in its ecosystem. While catching up to CUDA will be a steep uphill battle for challengers, it does potentially plant the seeds of weakening pricing power for Nvidia going forward. Nonetheless, Nvidia's moat is still safe for a few more years. In the previous article, we extensively discussed in a simplified manner how Nvidia is beautifully positioned to earn great streams of recurring revenue through its "NVIDIA AI Enterprise" software service. The company is striving to make it the core operating system for generative AI-powered software applications in this new era. Moreover, we also covered how the risk of cloud service providers designing their own chips is subdued by Nvidia increasingly building AI factories (private data centers) for enterprises directly, locking companies into its ecosystem and reducing enterprises' need to migrate to the cloud. 
So with enterprises increasingly making up a larger proportion of Nvidia's data center sales mix, and the tech giant set to generate large sums of recurring software revenue going forward, the bull case remains intact. In fact, on the earnings call in a few days, I expect CEO Jensen Huang to particularly emphasize the growing demand from enterprise customers. On the Q1 FY2025 Nvidia earnings call a few months ago, Huang proclaimed, "In Q1, we worked with over 100 customers building AI factories." This has become an increasingly important source of data center revenue for Nvidia, sustaining demand for its GPUs beyond just the Cloud Service Providers (CSPs). I am maintaining a 'buy' rating on NVDA. That being said, it is still important to cover the broader tech sector's efforts to build alternatives to Nvidia's platform, and how this tests the tech giant's moat. Markets tend to price in future events before they materialize, and understanding these developments enables investors to make more informed investment decisions for the long term. The delay in the shipping of Blackwell GPUs due to certain design flaws intensified the recent correction in the stock price. The fear is that such delays could create windows for rivals like AMD and Intel to capture some market share away from Nvidia. Nonetheless, these fears have subsided somewhat amid demand for Nvidia's previous generation chips, the Hopper series, remaining strong, with a note from UBS analyst Timothy Arcuri pointing out that: There is sufficient headroom and short enough lead time in H100/H200 supply for Nvidia to largely backfill lower initial Blackwell volumes with H200 units and the supply chain is seeing upward revisions consistent with some backfill. So on the earnings call next week, expect CEO Jensen Huang and CFO Colette Kress to repeatedly emphasize the continued strong demand for H100/H200s until Blackwell starts shipping. 
The revenue bump from the higher-priced Blackwell GPUs will simply be pushed out to the first quarter of 2025. The guidance for the rest of 2024 should continue to reflect strong sales growth, as tech companies continue to build out their AI infrastructure regardless of the Blackwell delay. That being said, the high price tag of Nvidia's GPUs has induced customers and challengers to seek out, or even build, their own alternatives, posing a potential risk to future demand for Nvidia's chips. It's no secret that Nvidia's top cloud customers, Microsoft Corporation (MSFT) Azure, Amazon.com, Inc.'s (AMZN) AWS, and Alphabet Inc.'s (GOOG) (GOOGL) Google Cloud, are designing their own chips, and they have been encouraging their customers to opt for their in-house hardware over Nvidia's GPUs. A prime reason why Nvidia's chips remain in high demand among cloud customers is the CUDA platform built around its GPUs. It boasts an extensive base of developers and thousands of applications that significantly extend the capabilities of the GPUs. Software developers from organizations, as well as independent developers, continue to build around the CUDA platform to optimally program the GPUs for various kinds of computing workloads. The expanding versatility and stickiness of this software layer is what enables Nvidia to keep customers entangled in the ecosystem and sustain demand for its expensive GPUs. Moreover, Nvidia boasts a broad range of domain-specific CUDA libraries oriented towards serving each particular economic sector. In fact, at Nvidia's COMPUTEX event in June 2024, CEO Jensen Huang proclaimed that: These domain-specific libraries are really the treasure of our company. We have 350 of them. These libraries are what it takes, and what has made it possible for us to have opened so many markets... We have a library for AI physics that you could use for fluid dynamics and many other applications where the neural network has to obey the laws of physics. 
We have a great new library called Aerial that is a CUDA-accelerated 5G radio, so that we can software-define and accelerate the telecommunications networks. The point is, CUDA's functionalities are broad and extensive, used across numerous sectors and industries, making it difficult for challengers to penetrate its moat in a short space of time. But that certainly has not stopped the company's largest customers from trying. Google Cloud was the first cloud provider to offer its own TPUs to enterprises in 2018, and it boasts AI startups like Midjourney and Character.AI as users of these chips. Most notably, last month it was reported that Apple used Google's TPUs to train its AI models as part of Apple Intelligence. This is a massive win for Google, not just because it positions its chips as a viable alternative to Nvidia's GPUs, but also because it could accelerate the development of a software ecosystem around its chips. In this process, third-party developers build more and more tools and applications around its TPUs that extend their capabilities, with Apple's utilization serving as an affirmation that this hardware is worth building upon. Such grand-scale use by one of the largest tech companies in the world should indeed help Google convince more enterprises to use its TPUs over Nvidia's GPUs. This calls into question the extent to which Google Cloud will continue buying GPUs from Jensen Huang's company in the future, as well as the volume and frequency at which it upgrades to newer generation chips from Nvidia. And then there is AWS, which has also been offering its own semiconductors for training and inferencing AI models: the Trainium chips since 2021, and the Inferentia chips since 2019. 
Moreover, on the Amazon Q2 2024 earnings call, CEO Andy Jassy emphasized the value proposition of its in-house chips over Nvidia's GPUs: We have a deep partnership with NVIDIA and the broadest selection of NVIDIA instances available, but we've heard loud and clear from customers that they relish better price performance. It's why we've invested in our own custom silicon in Trainium for training and Inferentia for inference. He certainly makes a compelling argument here on the price-performance benefits, given that the company's in-house chips can be more deeply integrated with other AWS hardware and infrastructure to drive more performance efficiencies. In fact, using an example of AI startup NinjaTech (an AWS customer), the computing cost difference between using Amazon's in-house chips and Nvidia's GPUs could reportedly be huge: To serve more than a million users a month, NinjaTech's cloud-services bill at Amazon is about $250,000 a month, ... running the same AI on Nvidia chips, it would be between $750,000 and $1.2 million. Nvidia is keenly aware of all this competitive pressure, and that its chips are costly to buy and operate. Huang, its CEO, has pledged that the company's next generation of AI-focused chips will bring down the costs of training AI on the company's hardware. Nvidia's aggressive pricing strategy is under pressure. Top cloud customers are luring enterprises into using their custom chips over Nvidia's hardware by highlighting cost advantages. They are thereby simultaneously inducing companies' developers to build software tools and applications around these in-house chips in a bid to catch up to CUDA. 
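The scale of the claimed gap is easy to quantify. Here is a minimal sketch based only on the reported NinjaTech figures quoted above; the helper function name is purely illustrative:

```python
# Rough cost comparison based on the reported NinjaTech figures:
# ~$250K/month on Amazon's in-house chips vs. an estimated
# $750K-$1.2M/month to run the same AI workload on Nvidia GPUs.

def cost_multiple(nvidia_monthly: float, in_house_monthly: float) -> float:
    """How many times more expensive the Nvidia-based setup would be."""
    return nvidia_monthly / in_house_monthly

in_house = 250_000
low = cost_multiple(750_000, in_house)    # 3.0x at the low end
high = cost_multiple(1_200_000, in_house)  # 4.8x at the high end

print(f"Nvidia setup reportedly {low:.1f}x to {high:.1f}x more expensive")
```

In other words, if the reported figures hold, the in-house chips would cut that customer's compute bill by roughly 67% to 79%, which is the kind of price-performance argument Jassy is making.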
At Nvidia's COMPUTEX event a few months ago, Jensen Huang emphasized that while Nvidia's GPUs carry high price tags, the company's ecosystem benefits from a powerful network effect that continuously drives down the cost of computing on its chips: Because there are so many customers for our architecture, OEMs and cloud service providers are interested in building our systems... which of course creates greater opportunity for us, which allows us to increase our scale, R&D scale which speeds up the application even more. Well, every single time we speed up the application, the cost of computing goes down. ... 100X speed up translates to 96%, 97%, 98% savings. And so when we go from 100X speed up to 200X, speed up to 1000X, speed up the savings, the marginal cost of computing continues to fall. Incredibly, Nvidia's virtuous network effect allows it to drive down the cost of computing on its GPUs, so that more workloads can be run cost-efficiently on its chips, allowing customers to save on training/inferencing costs over the long term. Until now, Nvidia has capitalized on this network effect by charging higher prices for its hardware, citing the longer-term cost benefits. Though now, with competition and alternatives ramping up, Nvidia's pricing power could inevitably diminish, subduing the extent to which it can charge premium prices for its GPUs despite its CUDA moat. In fact, Nvidia has reportedly already lowered the price of its GPUs in response to increased competition from AMD. Moreover, in a previous article, we extensively covered AMD's strategy (and the benefits) of catching up to Nvidia's closed-source CUDA platform by open-sourcing its own ROCm software platform, in a bid to attract more developers to build tools and applications around its own GPUs. 
And it's not just AMD that is leveraging the power of open-source to challenge Nvidia's moat, as reportedly: The enormous size of the market for AI computing has encouraged an array of companies to unite to take on Nvidia. ... Much of this collaboration is focused on developing open-source alternatives to CUDA, says Bill Pearson, an Intel vice president focused on AI for cloud-computing customers. Intel engineers are contributing to two such projects, one of which includes Arm, Google, Samsung, and Qualcomm. OpenAI, the company behind ChatGPT, is working on its own open-source effort. ... Investors are piling into startups working to develop alternatives to CUDA. Those investments are driven in part by the possibility that engineers at many of the world's tech behemoths could collectively make it possible for companies to use whatever chips they like and stop paying what some in the industry call the "CUDA tax." So the efforts to dethrone Nvidia's CUDA are certainly intensifying. While building a software platform and ecosystem comparable to CUDA will take time, these efforts could certainly undermine Nvidia's pricing power going forward. In fact, we even have a historical analogy portraying how the rise of open-source technology subdued pricing power during the computing revolution, as Goldman Sachs research argued: AI technology is undoubtedly expensive today. And the human brain is 10,000x more effective per unit of power in performing cognitive tasks vs. generative AI. But the technology's cost equation will change, just as it always has in the past. In 1997, a Sun Microsystems server cost $64,000. Within three years, the server's capabilities could be replicated with a combination of x86 chips, Linux technology, and MySQL scalable databases for 1/50th of the cost. And the scaling of x86 chips coupled with open-source Linux, databases, and development tools led to the mainstreaming of AWS infrastructure. 
Similarly, this time around, the tech sector is eager to invest in and develop alternatives to Nvidia's platform, with a particular emphasis on open-source software. One shouldn't underestimate the power of open-source technology, given how it gave rise to new computing giants like AWS. Challengers are seeking to repeat this history now in the generative AI era, refusing to continue paying premium prices to access Nvidia's ecosystem of AI solutions. Now, while the endeavors to challenge Nvidia's CUDA moat should certainly not be taken lightly, it is undoubtedly a steep uphill battle, simply because it has taken the market leader almost two decades to build this moat, and reportedly (emphasis added): Year after year, Nvidia responded to the needs of software developers by pumping out specialized libraries of code, allowing a huge array of tasks to be performed on its GPUs... The importance of Nvidia's software platforms explains why for years Nvidia has had more software engineers than hardware engineers on its staff. Moreover, the size of Nvidia's CUDA ecosystem has continued to grow quarter after quarter, boasting 5.1 million developers and 3,700 GPU-accelerated applications in Q1 FY2025, up from 4 million developers and 3,200 applications in the prior quarter. And it is safe to assume that this ecosystem will have grown even larger over Q2 FY2025, with new statistics expected to be released alongside the earnings report next week. Furthermore, CEO Jensen Huang took a swipe at challengers at the COMPUTEX event in June, explaining the difficulty of cracking Nvidia's moat: Creating a new platform is extremely hard because it's a chicken and egg problem. If there are no developers that use your platform, then of course there will be no users. But if there are no users, there is no installed base. If there is no installed base, developers aren't interested in it. 
Developers want to write software for a large installed base, but a large installed base requires a lot of applications so that users would create an installed base. This chicken-or-the-egg problem has rarely been broken, and it's taken us 20 years. One domain library after another, one acceleration library after another. And now we have 5 million developers around the world. We serve every single industry from health care, financial services, of course, the computer industry, automotive industry. Just about every major industry in the world. Indeed, developers are attracted by high earnings potential. And right now, Nvidia is the only tech giant with a large enough installed base for independent software developers to be able to make a living from selling tools and services through the CUDA platform. Consequently, the CUDA developer base continues to rapidly grow, adding over a million developers in just one quarter and continuously steepening the uphill slope for challengers to catch up. It is also worth noting that rivals striving to build a software platform similar to CUDA are barely a threat to Nvidia until they get their execution right. Take Google Cloud as an example, for whom building a software ecosystem around its TPUs reportedly hasn't been smooth sailing: Google has had mixed success opening access to outside customers. While Google has signed up prominent startups including chatbot maker Character and image-generation business Midjourney, some developers have found it difficult to build software for the chips. Execution matters. It's one thing to embark on building a software platform; it's another thing to get it right. This is not to imply that these CSPs and other challengers are not making any progress in building out their software ecosystems, but their struggles are a testament to Nvidia's decades' worth of work building a proficient platform. 
Furthermore, even as rivals progress in building out the software ecosystems around their in-house chips, convincing enterprises to migrate away from Nvidia GPUs is not that easy, as Amazon Web Services executive Gadi Hutt reportedly admitted (emphasis added): Customers that use Amazon's custom AI chips include Anthropic, Airbnb, Pinterest, and Snap. Amazon offers its cloud-computing customers access to Nvidia chips, but they cost more to use than Amazon's own AI chips. Even so, it would take time for customers to make the switch, says Hutt. This is a testament to just how sticky Nvidia's CUDA platform is. Once a company's developers are accustomed to powering their workloads using CUDA-based tools and applications, it can be time-consuming, energy-consuming, and simply inconvenient to migrate to a different software platform. It is also reflective of the immensely broad range of capabilities available through CUDA, requiring developers to conduct comprehensive due diligence to ensure adequately similar capabilities are available through rival platforms. Now, challengers will certainly strive to build more and more tools and services that ease the migration from Nvidia's GPUs and CUDA to their own AI solutions, but it will take time for these competitors to catch up. For Q2 FY2025, Nvidia has guided for $28 billion in total revenue, which would imply a year-over-year growth rate of 107%. While most market participants believe Nvidia should be able to handily beat this number, the real focus will be on the guidance for the rest of 2024 and commentary around the Blackwell delay. This is what will drive post-earnings stock price action. 
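As a sanity check on that guidance figure, the prior-year base quarter can be backed out from the growth rate. A quick sketch, using only the $28 billion guide and the 107% growth rate cited above:

```python
# Back out the prior-year quarter implied by Nvidia's guidance:
# $28B guided for Q2 FY2025 at ~107% year-over-year growth.
guided_revenue_bn = 28.0
yoy_growth = 1.07  # 107% expressed as a fraction

implied_prior_year_bn = guided_revenue_bn / (1 + yoy_growth)
print(f"Implied Q2 FY2024 base: ${implied_prior_year_bn:.1f}B")
```

This works out to roughly $13.5 billion, consistent with the data center ramp only beginning in earnest a year ago.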
As long as Nvidia's executives can tell analysts and investors on the earnings call that the Blackwell issue has been resolved, or is very close to being resolved, and can offer a visible timeline for starting to ship the GPUs at the end of this year or the beginning of next year, then the stock price should be able to continue rallying higher following the recent steep correction. Moreover, despite the delay, executives should be able to satisfy investors by offering strong sales guidance for the rest of 2024, given the bullish commentary offered by the CEOs of Nvidia's top customers. For instance, Google CEO Sundar Pichai shared on the Alphabet Q2 2024 earnings call that: I think the one way I think about it is when we go through a curve like this, the risk of under-investing is dramatically greater than the risk of over-investing for us here, even in scenarios where if it turns out that we are over-investing. We clearly -- these are infrastructure, which are widely useful for us. They have long useful lives and we can apply it across, and we can work through that. And we saw the same mindset from CEO Mark Zuckerberg on the Meta Q2 2024 earnings call: at this point, I'd rather risk building capacity before it is needed rather than too late, given the long lead times for spinning up new inference projects. So, in case you are hesitant about buying NVDA ahead of the earnings report amid the Blackwell delay and the potential impact on guidance, one can be confident that demand for the H100/H200 GPUs should remain sustainably high, especially as top customers are still building out their AI infrastructure. And as discussed throughout the article, the powerfully versatile CUDA platform that accompanies Nvidia's GPUs should keep customers entangled in the ecosystem, while it could still take years for challengers to catch up to this software moat. 
In fact, former Google CEO Eric Schmidt recently gave a talk at Stanford University and emphasized just how much the leading AI companies will need to spend on AI hardware in the coming years. This signals more growth ahead for Nvidia, with Eric Jackson (founder and portfolio manager of EMJ Capital) saying that: Talking to Sam Altman, they [Schmidt and Altman] both believed that each of the hyperscalers and the OpenAIs of the world would probably have to spend $300 billion each over the next few years, so Nvidia could basically only supply those hyperscalers, and their order book will be full for the next four years. Now, various experts could project differing CapEx numbers, but the key point is that demand for Nvidia's GPUs will remain elevated over the next couple of years during this AI infrastructure build-out phase. You have to ask: despite these hyperscalers each boasting about the price-performance capabilities of their custom in-house chips, and AI start-ups also creating alternatives to Nvidia, why are top customers still purchasing so much AI hardware from Nvidia? It is simply a testament to just how far ahead Nvidia's GPUs are in performance, and to the value proposition of the sticky CUDA software platform that deeply extends the capabilities of these chips. This is one of the key reasons NVDA stock will continue performing strongly from here. That being said, as competition inevitably ramps up, Nvidia's pricing power could indeed diminish over time, as discussed earlier. So while the tech giant currently boasts gross profit margins of 70%+ and net margins of 50%+, expect margins to compress somewhat going forward as Nvidia lowers the prices of its GPUs to discourage migrations to rivals' AI solutions. Now, any hint of softer pricing for next-generation GPUs could induce "weak hands" to sell out of the stock, potentially triggering a steep correction. Though for long-term investors, this would present a beautiful buying opportunity. 
At present, Nvidia is right to charge premium prices for its GPUs, given its wide market leadership in the absence of meaningful competition, as it gives the company the opportunity to invest more in R&D to sustain its leadership, as well as build up a massive cash reserve. But even when Nvidia is compelled to lower prices in the future as competition catches up, this would be the right move. The tech giant's main goal should be to sustain a large installed base of its hardware, upon which it can earn rent in the form of recurring software revenue. We discussed extensively the growth opportunities through the "NVIDIA AI Enterprise" software operating system in the previous article, whereby Nvidia stands to generate $4,500 per GPU annually from enterprises as they run their generative AI-powered software applications in this new era. So any potential margin compression from weakening GPU pricing power should be recouped through higher-margin software revenue making up a larger portion of the tech behemoth's revenue base. As discussed in a prior article, Nvidia's recurring software revenue is expected to make up 30% of total revenue going forward, which should be supportive of both profit margins and stock price performance. Now, in terms of Nvidia stock's valuation, it is currently trading at around 47x forward earnings, in line with its 5-year average multiple. Though amid the recent 27% correction in the stock price, NVDA was trading at around 36x earlier this month, creating a beautiful opportunity for dip buyers. Nonetheless, when we adjust the current forward P/E multiple by the expected EPS forward long-term growth rate (3-5Y CAGR) of 36.60%, we obtain a forward Price/Earnings-to-Growth [PEG] multiple of 1.28x. This is significantly below its 5-year average of 2.10x. For context, a forward PEG multiple of 1x would reflect a stock trading at fair value. 
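The PEG arithmetic is straightforward to reproduce. A quick sketch using only the multiples quoted above; the dip-bottom figure of roughly 36x forward earnings is included for comparison:

```python
# Forward PEG = forward P/E divided by the expected EPS growth rate (in %).
def forward_peg(forward_pe: float, growth_pct: float) -> float:
    return forward_pe / growth_pct

growth = 36.60  # expected EPS forward long-term growth (3-5Y CAGR), in %

current_peg = round(forward_peg(47.0, growth), 2)  # at ~47x forward earnings
dip_peg = round(forward_peg(36.0, growth), 2)      # at the ~36x dip lows

print(current_peg, dip_peg)  # 1.28 and 0.98
```

Note that at the dip lows, the forward PEG briefly fell below the 1x "fair value" threshold, which is why the pullback was framed as a beautiful opportunity for dip buyers.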
Though it is common for popular tech stocks to command multiples of above 2x, with the market assigning a premium valuation based on factors such as "broad competitive moats" and "strength of balance sheet." For example, Microsoft stock currently trades at a forward PEG ratio of 2.39x. Coming back to NVDA, even after the stock recovered from its recent selloff, a forward PEG of 1.28x is still a very attractive multiple to pay for a company with such a powerful moat around its leadership in the AI era. That being said, even in the run-up to the earnings report in a few days, the stock could continue to face volatility, as "weak hands" continue to sell out of the stock on short-term developments like the Blackwell delay. In fact, on August 22nd, the stock declined by almost 4% again. However, long-term investors who understand the wide lead of its CUDA platform, and the software growth opportunities ahead, should not miss any dip-buying opportunities in the stock as we approach the earnings release. I am maintaining my 'buy' rating on NVDA.
[2]
Nvidia: Despite Megatrends, Discipline Warranted (NASDAQ:NVDA)
If you are like just about every single person I talk to, you own shares of Nvidia (NASDAQ:NVDA). And you probably love to tell your story of how you got in early, or how you should have bought more. But what about Nvidia's valuation? And what about your "position sizing" of Nvidia within your aggregate investment portfolio? After reviewing Nvidia's business, megatrend growth trajectory and risks, we discuss its current valuation and prudent position sizing within your personal investment portfolio. We conclude with our strong opinion about owning Nvidia shares ahead of the upcoming earnings announcement. Nvidia makes semiconductors, or "chips." Specifically, the company makes graphics processing units ("GPUs"), which have proven superior to traditional central processing units ("CPUs") in a wide variety of applications ranging from video games to cloud computing (data centers), cryptocurrency mining, and now artificial intelligence. And as the company continues to innovate and grow, it has evolved into an accelerated computing platform with a full-stack approach encompassing silicon (chips), systems and software. Nvidia has dramatically advanced "compute" power in a relatively short period of years (see graphic above), thereby opening a dramatic new world of accelerated computing possibilities. And more immediately, Nvidia's ramping Blackwell chips (see image above), combined with liquid cooling, will be a next step in accelerated computing for GPUs. More specifically, the liquid cooled A100 will be readily available in Q3 and a liquid cooled H100 should be available in early 2025. It would be naive to suggest that it's time to sell Nvidia simply because its share price and market capitalization have risen so dramatically. 
Because what was simply a $5 billion market cap company (on the verge of bankruptcy) just 15 years ago is now one of the largest companies on the planet, and it has truly massive market opportunities ahead. To some people, "the cloud" is this mysterious place where all your personal computer data and email messages are magically stored in thin air (instead of on your personal computer hard drive like they were years ago). In reality, the cloud is really just massive computer data centers where everyone's data (people, businesses, governments) is now stored, and this is a megatrend that is still just getting started. Worldwide spending on public cloud services is forecast to reach $805 billion in 2024 and double in size by 2028, according to the latest update to the International Data Corporation (IDC) Worldwide Software and Public Cloud Services Spending Guide. Although annual spending growth is expected to slow slightly over the 2024-2028 forecast period, the market is forecast to achieve a five-year compound annual growth rate (CAGR) of 19.4%. And what is so impressive about this megatrend is that Nvidia chips are the prime beneficiary, as they have largely displaced almost everyone else (for example, Intel (INTC)) and now reign supreme in this ongoing long-term megatrend (i.e., "the great cloud migration"). And as if the cloud migration wasn't already big enough, the release and growth of new artificial intelligence models have now accelerated the need for cloud data storage (and the need for Nvidia chips in particular). Features in chips, systems and software make NVIDIA GPUs ideal for machine learning, with performance and efficiency enjoyed by millions. And ChatGPT provided a powerful example of how GPUs are great for AI: the large language model (LLM), trained and run on thousands of NVIDIA GPUs, runs generative AI services used by more than 100 million people. 
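The IDC doubling claim can be cross-checked against the quoted growth rate. A rough sketch, assuming spending doubles from the 2024 level by 2028 (i.e., compounding over roughly four years), which lands in the neighborhood of IDC's stated 19.4% figure:

```python
# If public cloud spending hits $805B in 2024 and doubles by 2028,
# the implied compound annual growth rate over those ~4 years is:
spend_2024_bn = 805.0
spend_2028_bn = 2 * spend_2024_bn
years = 4

implied_cagr = (spend_2028_bn / spend_2024_bn) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~18.9%
```

The ~18.9% back-of-the-envelope result is consistent with IDC's 19.4% five-year CAGR, so the "double by 2028" framing and the stated growth rate hang together.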
So considering the dominance of Nvidia chips (and the incredible demand growth), it's easy to see how so many people have gotten so excited about investing in Nvidia. However, there are risks that need to be considered too. The two most common negative things ("risks") I hear about Nvidia are (1) it's just the tech bubble all over again, and (2) you do realize chip stocks are notoriously cyclical and Nvidia is obviously overdue for a massive pullback. Regarding the first, I'm sure you've likely heard the objection (which I agree with) that the tech bubble was based on wild hysteria over the amazing new internet (whereby companies with zero earnings and very little revenue were trading at inappropriately high nosebleed stock prices), as compared to Nvidia today (a company with massive revenue, massive growth and truly incredible bottom-line net profits). The tech bubble was a "bubble;" Nvidia is a profitable revenue-growth juggernaut. Regarding the second point, yes, chip stocks are cyclical (see chart below), but they are also the direct beneficiaries of the two long-term megatrends described above (i.e., the cloud and AI), and any company that can stay ahead of the competition (like Nvidia has so far) is positioned to reap truly massive rewards over the long term. But to at least acknowledge the cyclicality risk a bit more: yes, supply and demand are dynamic and there is the risk of market cycle pullbacks (as Nvidia and other chip stocks have experienced repeatedly throughout history). But again, what makes Nvidia special is its leading position as a beneficiary of the two megatrends, which continue to create massive long-term upside potential. Worth mentioning (from a supply and demand standpoint), there have been delays in the release/availability of Nvidia's next-generation Blackwell chips, but arguably this has helped spread out future revenues a little better, and the company has not been significantly hurt so far because of it. 
Obviously, competition is a risk too (after all, 25 years ago Intel was the semiconductor leader believed by many to be unstoppable), and current-day competitors like AMD, as well as other mega-cap companies' continued attempts to develop homegrown chip alternatives, present a threat. But as long as Nvidia remains a leader (like it is positioned to be with its upcoming 2026 platform called "Rubin"), it will continue to benefit from AI and the great cloud migration, as noted in this article: Dominating roughly 80% of the market for AI chips, Nvidia stands in a unique position as both the largest enabler as well as beneficiary of surging AI development. Aside from competition and cyclicality (and comparisons to the "tech bubble"), geopolitics and tough comps also present risks, as we noted in our previous Nvidia article: "Despite Red Flags, It's Going Much Higher." It's always worth a good laugh when some "super smart" analyst attempts to use old-school valuation models and metrics to value a disruptive growth company like Nvidia, because the rates of growth and innovation are so fluid that even the slightest over- or under-projection assumption can make the entire valuation exercise nearly worthless. Nonetheless, here are the current ratings and price targets of the 60 Wall Street analysts following Nvidia over the last 90 days. As you can see, these guys and gals rate Nvidia a "Strong Buy" with more than 10% upside (versus the current share price). But you can also see (below) how quickly their price targets and ratings have changed (i.e., they basically follow the actual stock price) in the relatively recent past. However, for a little comparative analysis, here is a look at how Nvidia rates on a forward PEG ratio (price/earnings-to-growth) basis as compared to other mega caps and to other leading chip stocks. And if this data (above) is at least as accurate as the broad side of a barn, it seems Nvidia's valuation is relatively quite attractive. 
Of course there are certain critical assumptions (such as future growth rates) baked into these data, and if they're off by even a small amount then the price targets would be significantly different. And for one more data point, here is a look at current and forward price-to-sales ratios for Nvidia versus Intel (below). As you can see, the two are being valued quite differently. What's more, analyst valuations typically only forecast revenue and growth a few quarters and years into the future, when in reality the great cloud migration megatrend has many more years to run (even beyond analyst projections). So there is a lot of uncertainty baked into these Wall Street analyst ratings, to say the least. If you are going to invest in Nvidia (or any individual stock for that matter) it's critical to consider your absolute and benchmark relative allocation strategy. If you think you are going to outperform an S&P500 index fund, you should at least know that Nvidia is currently around 6.5% of the S&P500. So if you hold less than 6.5% of Nvidia in your personal portfolio (say you have a 5% position size, which means you are "underweight" Nvidia) then you are basically making an active bet that you think Nvidia will underperform the market (after all, every passive S&P500 index fund investor owns more Nvidia than you). Obviously, every investor has their own unique goals and asset allocation (for example, perhaps you own mostly US treasury bonds for the safe income) thereby making a comparison of your holdings to the S&P500 apples-to-oranges. But it's worth giving some thought to how big of a bet you want to make on Nvidia (by being either over or under weight versus a passive index fund like the S&P 500). 
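The over/underweight framing above is simple arithmetic, and for readers who want to see it spelled out, here is a minimal sketch using the article's example figures (the 6.5% index weight is the article's number; the 5% personal position and the function name are ours, for illustration only):

```python
# Sketch of the "active bet" framing: your position weight minus the benchmark
# weight. Positive = overweight (betting on outperformance), negative = underweight.

def active_weight(portfolio_weight: float, benchmark_weight: float) -> float:
    """Active weight versus the benchmark, as a decimal fraction."""
    return portfolio_weight - benchmark_weight

nvda_in_sp500 = 0.065   # ~6.5% of the S&P 500, per the article
my_position = 0.05      # hypothetical 5% personal allocation from the article's example

bet = active_weight(my_position, nvda_in_sp500)
print(f"Active weight: {bet:+.1%}")  # negative => an implicit bet on underperformance
```

So a 5% personal position against a 6.5% index weight is a -1.5% active weight: every passive index-fund investor effectively owns more Nvidia than you do.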
And for a little more perspective: if you are focused more on the absolute volatility of your portfolio (in terms of standard deviation) than on its performance (or tracking error) versus an index fund (like the S&P 500), then an old-school 1970s study from Fisher and Lorie suggests holding at least 25-30 individual stocks is the "magic number" to diversify away the lion's share of idiosyncratic, stock-specific risk (which you may or may not want to do, depending on your personal goals and tolerance for certain types of risk). Nvidia is set to announce earnings after the market close on Wednesday, August 28th, and the latest concerns surround the extent to which the recent Blackwell delays will push out revenue. Specifically, a Reuters report back in May noted, according to an Nvidia spokesperson: "As we've stated before, Hopper demand is very strong, broad Blackwell sampling has started, and production is on track to ramp in the second half." Jefferies analyst Blayne Curtis expects another "strong beat for July and strong guidance into October" from Nvidia, with beats of about $1B for both the results and the guide. Such an outcome would likely add support to the current share price. More importantly, for simple risk management purposes, investors may want to consider whether their current allocation to Nvidia (whether over or underweight) is still prudent ahead of potential earnings price volatility and in light of their own individual situation. Nvidia's long-term potential remains compelling. However, if you are losing sleep over near-term volatility (especially ahead of earnings), that could be an indication that you need to "rightsize" your allocation (both absolute and benchmark-relative) and perhaps even consider Nvidia as part of a partially indexed core-satellite strategy (as we recently wrote about here). Most importantly, you need to do what is right for you, based on your own personal situation. 
Disciplined, goal-focused, long-term investing continues to be a winning strategy.
[3]
3 Reasons I Bought More Nvidia Ahead Of Earnings And You Should Too (NASDAQ:NVDA)
For Nvidia (NASDAQ:NVDA) investors, it's been a volatile few weeks. From its record high of $140, NVDA fell to an intraday low of $90 on Aug. 5, during the Yen Carry Trade unwind. That's a 35% peak decline in a matter of weeks. And then, from that panic-selling bottom, NVDA soared 45% to $130 before pulling back a bit on Tuesday. I Was "Greedy When Others Were Fearful" During The Recent Market Pullback I use limit orders to keep the target allocation for each stock steady during corrections. Thanks to NVDA's normal volatility (it fell 2.35X as much as the S&P in this pullback, and 3.5X at the peak intraday decline), I could buy 650 shares of NVDA going into earnings. I didn't nail the bottom, just over $90 on Aug. 5, but I got close. Was I worried as these limits filled? Not at all, because I was tracking NVDA's fundamentals each day and could see that, despite the market panic triggered by hedge fund margin calls and algo trades, NVDA's future was brighter than ever. So let me share the three reasons I bought so much NVDA in recent weeks, going into earnings. More importantly, these three reasons explain why it's still a potentially good idea for long-term dividend growth investors to buy NVDA even after its strong 40% recovery from its recent lows. NVDA reports on Wed, Aug. 28, after the close. As usual, earnings estimates have steadily risen by the week, with 37 upward revisions this quarter. NVDA's track record of beating earnings is perfect over the last two quarters, since the company is supply constrained. According to Bloomberg, the average sales beat and raise over the last six quarters is 12.5%. Taiwan Semi (TSM) says it's increasing wafer production by 150% in 2024 and 100% in 2025, but thinks that NVDA will remain supply constrained through the end of 2025. TSM, NVDA's No. 1 supplier, announced 45% sales growth courtesy of insatiable AI chip demand. 
Goldman Sachs: Analysts at Goldman Sachs are confident that Nvidia will surpass Street consensus for revenue and earnings per share, driven by robust Data Center revenues and operational leverage. They maintain a "buy" recommendation, highlighting Nvidia's competitive edge in AI and accelerated computing. Jefferies: Analysts at Jefferies expect Nvidia to continue delivering earnings beats, although the magnitude of these surprises may be smaller than in previous quarters. They anticipate a strong earnings report for July, with guidance into October expected to exceed expectations by about $1 billion. Wedbush Securities: Wedbush maintains confidence in Nvidia's ability to report a strong quarter, citing robust AI spending as a critical driver. They expect Nvidia's earnings to be bolstered by increased investment from significant customers in AI infrastructure. Morningstar and Morgan Stanley estimate 15% to 17% sales beats and raises, and KeyBank is similarly bullish, especially for 2025. All of the hyperscalers, the tech giants driving 50% of sales for NVDA right now, including Microsoft (MSFT), Alphabet (GOOG), Meta (META), and Amazon (AMZN), have announced they are increasing AI capex this year and next year. That likely means that NVDA could sell 100% of production at full prices and record margins. "In terms of figures, Team Green is expected to ship 60,000 to 70,000 units of Nvidia's GB200 AI servers, and given that one server is reported to cost around $2 million to $3 million per unit, this means that Team Green will bag around a whopping $210 billion from just Blackwell servers alone, and that too in a year." - Morgan Stanley KeyBank has similar estimates of "at least $210 billion" in Blackwell sales alone in 2025. Now that NVDA has announced a delay in Blackwell shipments (which were supposed to start shipping in Q4), 2024 Blackwell shipments may be pushed into 2025. 
As I explained in my last pre-earnings update, NVDA's potential sales in 2025, per TSMC's supply ramp, are up to $265 billion. KeyBank and Morgan Stanley now estimate $210-plus billion in Blackwell sales alone. Remember that Hopper isn't going away; it will be a lower-cost option. $265 billion is the maximum theoretical revenue that NVDA could achieve if TSM's ramp is successful and it can sell 100% of capacity at full price. Before the Blackwell delay announcement, Morgan Stanley and KeyBank estimated $210 billion in Blackwell sales, not including other chips, auxiliary hardware, or NIMs (software). The current consensus is $164 billion in 2025 sales, up from $160 billion a few weeks ago and $150 billion at the start of the year. NVDA can theoretically sell up to $265 billion next year, and Morgan Stanley and KeyBank expect around $230 billion. What does that mean for potential free cash flow beats? Assuming stable 49% free cash flow margins (which analysts expect), that would represent a $65 to $70 billion sales beat and $114 billion in free cash flow next year. The current free cash flow consensus is $78 billion, so $114 billion would represent a 46% beat next year. $114 billion in FCF would also be 13% higher than the $101 billion in 2026 FCF analysts currently expect. Nvidia Fair Value Profile The current analyst base case is that NVDA will have almost 40% upside by the end of next year. If NVDA achieves the maximum growth numbers that Morgan Stanley and KeyBank expect, there is as much as 141% upside return potential, justified by fundamentals. Goldman Sachs thinks AI will start to affect GDP growth in 2027 and corporate profits potentially in 2026. In other words, 2024 and 2025 are about supply constraints, with hyperscalers seemingly willing to buy 100% of NVDA's capacity (as well as some AMD and INTC AI chips). 
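For readers who want to check the back-of-envelope FCF math above, here is a minimal sketch with the article's figures as inputs (all variable names are ours; the derived FCF comes out near $113B, which the article rounds to $114B):

```python
# Reconstructing the article's FCF-beat scenario from its stated inputs.
# These are the article's estimates, not audited data.

bull_sales = 230e9        # Morgan Stanley / KeyBank 2025 sales scenario
consensus_sales = 164e9   # current 2025 sales consensus, per the article
fcf_margin = 0.49         # assumed stable free cash flow margin

sales_beat = bull_sales - consensus_sales   # ~$66B beat vs consensus
bull_fcf = bull_sales * fcf_margin          # ~$113B (article rounds to $114B)
consensus_fcf = 78e9                        # current FCF consensus
fcf_beat = bull_fcf / consensus_fcf - 1     # ~45% beat vs consensus

print(f"Sales beat ${sales_beat/1e9:.0f}B, FCF ${bull_fcf/1e9:.0f}B, FCF beat {fcf_beat:.0%}")
```

The small gap between the derived ~45% and the article's 46% comes from rounding $112.7B up to $114B; the shape of the argument is the same either way.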
In 2026 and 2027, the years of the Rubin and Rubin Ultra chip platforms, NVDA expects hyperscalers to become a smaller part of its business as other companies and governments (sovereign AI) continue buying the best chips (for cost efficiency). The median growth consensus for NVDA is 44.1%, up from 40% a few weeks ago and 33% three months ago. Long-term EPS growth estimates range from 10% to 54%, with 44.1% being the median. And remember, this is the current consensus. If NVDA achieves $114 billion in FCF next year, as Morgan Stanley thinks is possible, then NVDA will have achieved its 2027 results in 2025. "Nvidia's margins can't possibly be maintained for long." Not according to the median consensus of the 62 analysts who cover it for a living and have access to channel checks and data that YouTube prognosticators can't even dream of. Why is that plausible? Because Nvidia's value proposition to customers is long-term overall operating costs. Microsoft is one of the largest buyers of GPUs, AI, and cloud computing infrastructure. Do you think MSFT minds paying $35K to $40K for the world's most advanced AI chips if it means that long-term operating costs per token are 50% to 75% lower than what cheaper but less advanced chips can achieve? The world's wealthiest companies can easily afford to take the long-term view, and Nvidia is guiding for at least 50% reductions in cost-per-token each year after 2025. In other words, NVDA is already the low-cost leader in long-term AI operating costs. For example, Blackwell is 4X better at training and 30X better at inference (operating AI models) than Hopper. Hopper replaced the Ampere generation of AI chips and was 9X better at training and 30X better at inference than Ampere. In other words, between the A100 chips and the B200 chips, NVDA has improved training capacity by 36X and inference by 900X. And here's why that matters: most companies don't need 900X faster inferencing. At some point, speed isn't as important as energy and cost. 
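The generational figures above compound multiplicatively, which is a trivial but easy-to-miss point; a two-line check of the article's own numbers:

```python
# Compounding the article's per-generation gains: Ampere -> Hopper -> Blackwell.
# Figures (9X/30X and 4X/30X) are the article's claims, taken at face value.

hopper_vs_ampere = {"training": 9, "inference": 30}
blackwell_vs_hopper = {"training": 4, "inference": 30}

total = {k: hopper_vs_ampere[k] * blackwell_vs_hopper[k] for k in hopper_vs_ampere}
print(total)  # {'training': 36, 'inference': 900} -- A100 to B200, per the article
```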
In the next decade, Jensen Huang says Nvidia can help accelerate computing power in AI by one million-fold. For a company like MSFT or OpenAI, it might be optimal to make the best LLM, such as Project Stargate (a reported 10-million-GPU model in 2028 costing over $100 billion), 1 million times faster for the same cost and energy. For others, today's AI inference speed might be fine, but at 1 million times lower cost (per token). For some countries, the goal might be to achieve 1 million times lower energy (and cost) at the same speed as today. Or, for most companies, 1,000X faster AI computing for 1,000X lower cost (and 1,000X lower energy usage) might be the ideal sweet spot. NVDA spends $10 billion annually on R&D, and that is expected to continue growing; its R&D spending is expected to triple to $30 billion by 2029. The company's historical median 20% to 25% free cash flow return on invested capital means its growth spending is highly efficient. In the last year, the free cash flow return on invested capital reached 87%. That isn't sustainable, but for now, the more Nvidia spends, the faster it grows. "The more you buy, the more you save." - Jensen Huang's sales pitch at keynotes Since 2022, NVDA has been rated among the top five companies to work for, and its workers consider Jensen Huang the fourth-best CEO. 76% of NVDA employees are millionaires. If you're a world-class AI computer engineer, do you want to work for Intel? Or for a thriving industry leader whose CEO is hailed as a genius? Happy, productive employees who are getting richer than they ever dreamed of are another reason to ride with Jensen Huang. Jensen Huang owns 3.5% of NVDA and is the 12th richest person on Earth. He has been CEO for 30 years (he co-founded the company) and says he plans to remain CEO for 30 to 40 more years. If he can keep his finger on the pulse of future tech as well as he has for the last 30 years, Huang could be a potential Buffett of tech. 
(Source: Portfolio Visualizer) NVDA didn't start paying a dividend until 2012. And the dividend was low and frozen for years. Yet, NVDA has delivered 4X as much inflation-adjusted income as the S&P and almost 10X more than the Nasdaq. Income investors who have owned NVDA since the beginning now see an inflation-adjusted 47% yield on cost. Analysts say every $1,000 invested at the IPO now pays $470 yearly in dividends, likely to grow 40% to 50% annually. No one denies that Nvidia is an amazingly innovative and fast-growing company. The major concern many investors have is whether it's overvalued. NVDA is trading at 38.87X forward earnings. Its enterprise value to free cash flow (EV/FCF) multiple is 40.3X next year's estimates, representing a forward cash-adjusted PEG ratio of 0.91. Its current PEG ratio, measured the traditional way, is 0.95. So Peter Lynch would say that as long as NVDA keeps growing as expected, it is growing at a reasonable price. (Source: FactSet) A PEG of 1 is what NVDA has historically traded at for the last 20 years. That's not opinion; it's objective market fact. So, if NVDA trades at a PEG of 1, it's objectively not overvalued by its own historical standard. The first 2030 sales estimates are $307 billion, or 17.8% annualized growth from 2024 to 2030. That implies a historical fair value of $6.6 trillion to $7.8 trillion, or 172% to 222% upside to consensus fundamentally justified fair value by the end of 2029. That would translate into 22% to 27% annualized returns for the next 5.5 years. Remember, these are the current consensus estimates, which rise with each quarterly beat and raise. Options markets are currently pricing in an 8.7% single-day swing post-earnings. But that doesn't mean NVDA will go up or down 9%. I expect a 5% to 15% pop, but up to a 25% upward move might occur if NVDA pulls off a big enough beat and raise. NVDA's epic gains have been justified by record fundamental growth. And the bear markets this year, which have been 25% and 35%, respectively, have not been justified by fundamentals. 
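The PEG arithmetic above is just a valuation multiple divided by the expected growth rate, and the cash-adjusted figure can be reproduced directly from the article's inputs (the traditional 0.95 figure uses inputs the article doesn't fully spell out, so only the cash-adjusted version is derived here; the function name is ours):

```python
# PEG ratio = valuation multiple / expected growth rate (in percent).
# Inputs are the article's figures: 40.3X EV/FCF and 44.1% median growth consensus.

def peg(valuation_multiple: float, growth_pct: float) -> float:
    """A PEG near 1 is the article's benchmark for 'growth at a reasonable price'."""
    return valuation_multiple / growth_pct

cash_adjusted = peg(40.3, 44.1)
print(f"Cash-adjusted PEG: {cash_adjusted:.2f}")  # 0.91, matching the article
```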
It's easy to be "greedy when others are fearful" when all the fundamentals are strong. But what happens when NVDA's earnings decline? (Source: Portfolio Visualizer, FAST Graphs, FactSet) In the last 15 years, NVDA's median earnings recession has been a 24% EPS decline. The market's median reaction is a nearly 50% sell-off. If smaller language models prove as effective as LLMs, or specialized chips prove better than GPUs, it could disrupt NVDA's current AI lead. New types of chips are indeed being developed to outperform traditional GPUs specifically for AI tasks. These include specialized AI chips, such as Application-Specific Integrated Circuits (ASICs) and Tensor Processing Units (TPUs), designed to handle AI workloads more efficiently than general-purpose GPUs. ASICs: These are custom-designed chips optimized for specific applications, including AI. They offer higher efficiency and performance for particular tasks compared to general-purpose GPUs. For example, Google's TPU is an ASIC designed to accelerate machine learning workloads. TPUs: Developed by Google, TPUs are specifically designed to accelerate machine learning tasks. They are optimized for TensorFlow, Google's machine learning framework, and provide significant performance improvements for AI applications. FPGAs: Field-Programmable Gate Arrays are another type of AI chip that can be configured for specific tasks. They offer flexibility and can be optimized for various AI workloads, although they may not always match the raw performance of ASICs or TPUs. These specialized AI chips are designed to handle the massive parallel processing required for AI tasks more efficiently than GPUs. They achieve this by optimizing for lower-precision arithmetic, which reduces the number of transistors needed and, thus, the energy consumption, while maintaining the necessary performance for AI algorithms. 
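To make the lower-precision point concrete, here is a toy sketch of weight quantization, the common technique behind that efficiency claim: storing model weights as 8-bit integers instead of 32-bit floats cuts memory (and the silicon and energy needed to move and multiply those bits) by roughly 4X, at a small accuracy cost. This is an illustrative symmetric quantizer of our own, not any chip vendor's actual scheme:

```python
# Toy int8 quantization: map floats to [-127, 127] with one scale factor,
# then reconstruct them to see the round-trip error.

def quantize_int8(weights):
    """Return (int8-range values, scale). Falls back to scale=1.0 for all-zero input."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.82, -0.41, 0.05, -0.99, 0.33]   # made-up example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

fp32_bytes = len(weights) * 4   # 4 bytes per 32-bit float
int8_bytes = len(q) * 1         # 1 byte per 8-bit integer
print(f"Memory: {fp32_bytes}B -> {int8_bytes}B (4x smaller)")
print(f"Max round-trip error: {max(abs(a - b) for a, b in zip(weights, restored)):.4f}")
```

Real AI accelerators use far more sophisticated formats (int8, fp8, block-scaled variants), but the economics are the same: fewer bits per operation means less energy per token.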
This makes them particularly suitable for applications like large language models, edge AI, and autonomous vehicles, where efficiency and speed are critical. NVDA must maintain its lead in AI tech. If it loses the ability to offer the lowest long-term cost option, the bullish thesis will be thrown out the window. Here are the ten largest bear markets for NVDA. And here's how volatile this hyper-volatility, hyper-growth blue chip is. (Source: Portfolio Visualizer) In the last 25 years, NVDA has had 24 monthly declines of 20%-plus. It's fallen by almost 50% in a single month. It's fallen as much as 40% in a month while the S&P and Nasdaq were flat! If you can't tolerate such volatility and harness it to buy more when sentiment is at its worst, you will miss out on gains like these. (Source: Portfolio Visualizer) NVDA's best 10% of months deliver a median 32% gain, with returns as strong as 82% in a single month. Can you see the benefits of rebalancing in real time during bear markets using managed futures and limit orders? The 40% monthly crashes are what create the 82% monthly rallies. If your optimal NVDA allocation is 5% and the stock falls 40%, you can buy with limit orders and keep it at 5% the entire time. And so when that face-ripping rally comes around, after NVDA overshoots to the downside, you end up with maximum benefit. Then, at annual rebalancing time, you can harvest those excess returns, including bubble-excess valuations, to buy high-yield blue chips and compound income even faster. This is why I am so bullish on NVDA. Not just for the growth; I don't care about growth. I care about total returns, because income tracks total returns due to annual rebalancing. By harvesting the yield generated by NVDA's extreme volatility, long-term NVDA investors can maximize long-term inflation-adjusted income. NVDA's worst 5% of months are 30% monthly declines, 3X worse than the S&P's worst 5% of months (-10%) and 2X worse than the Nasdaq's (-15%). 
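The constant-allocation rebalancing the author describes can be sketched in a few lines. This is a minimal illustration with hypothetical figures (the $1M portfolio, share counts, and function name are ours; the $140-to-$90 price move is from the article), not the author's actual trading system:

```python
# After a crash, a fixed-weight position shrinks below target; buy enough shares
# at the lower price to restore the target weight.

def shares_to_rebalance(portfolio_value, target_weight, position_shares, price):
    """Shares to buy (+) or sell (-) to restore target_weight at the current price."""
    target_value = portfolio_value * target_weight
    current_value = position_shares * price
    return (target_value - current_value) / price

# Hypothetical: $1M portfolio, 5% NVDA target, NVDA falls from $140 to $90,
# other holdings assumed flat for simplicity.
shares = 357                          # ~5% of $1M at $140/share
portfolio = 950_000 + shares * 90     # portfolio value after the drop
buy = shares_to_rebalance(portfolio, 0.05, shares, 90)
print(f"Buy {buy:.0f} shares at $90 to restore the 5% target")
```

In practice, the author describes pre-placing limit orders at successively lower prices so the rebalancing happens automatically during the panic rather than after it.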
NVDA has historically captured 329% of the S&P's upside in exchange for 183% of its downside when stocks fall. It delivered safe 26% withdrawal rates, though no investor would ever be 100% NVDA as a retirement plan. By combining NVDA with managed futures, you can adjust its volatility to what you can emotionally and financially withstand. For example, a 50% NVDA and 50% KMLM stock bucket captured 189% of the S&P's upside with 22% of its downside. A 25% NVDA and 75% KMLM mix roughly captured the market: 91% of the S&P's upside, with 26% downside. That means that since Jan 2021, a 75% KMLM and 25% NVDA combo went up 9% when the S&P was up 10%, and up 2.6% when the market was down 10%. This is the key to emotional and financial risk management. I'm not trying to time whether NVDA will soar or crash. No one can predict that. NVDA has risks, including regulatory/antitrust risks, R&D efficiency risks, and over 1,000 other risks. (Source: S&P) Let me be clear: I never make forecasts. I look at consensus fundamentals on quality companies and then calculate historically justified fundamental total return potentials. NVDA's year-end fair value is $147, and if it beats and raises as expected in this year's final two earnings reports, that would likely rise to $175 to $195. NVDA has been tracking its fair value for about 18 months, ever since the AI explosion sent its fundamentals soaring at record rates. Does that mean NVDA will be $200 by year-end? As we've seen many times this year, market sentiment is fickle. NVDA has soared as much as 50% in one month, crashed 25% in a week, and soared 45% in the last few days. But it keeps returning to historical fair value, 40.14X earnings, each time, as it has for the last decade. It's critical to properly size your financial and emotional risk with NVDA. This stock is the quintessential high-volatility hyper-growth blue chip. The downside volatility for the unprepared investor can be soul-crushing. 
The volatility to the upside, created by the extreme crashes, is mind-blowing. But leave the speculation to the traders. Don't attempt to time NVDA. If you like the company, including its visionary CEO, who says he wants to keep innovating, buy at a reasonable price ($132.83 fair value or lower) and hold on. Once my annual asset allocation is set, I manage my NVDA risk by rebalancing on a fixed schedule, using trailing stops. I'm targeting an 11% max overweight in NVDA this year because this is the lowest-risk year for NVDA. That's because TSM's ramp is running through 2025 (possibly into 2026), and TSM management has confirmed that NVDA will likely be capacity-constrained through the end of 2025. That means NVDA's potential for up to $265 billion in revenue capacity (if there is demand) could support the $225 to $235 billion that Morgan Stanley and KeyBank expect. Given that the market looks 12 months forward, the risk of an air gap in demand (a sudden decrease in orders and collapse of margins) is very low for all of 2024. Investors will start looking to 2026 in 2025, which means more significant risk of a sustained 40% to 60% bear market should an air gap develop. I'll adjust the target allocation to 7.5% to 9.1% next year, depending on the economy's performance, based on my portfolio's algo-driven risk management rules, which are optimized for my family's needs at any given time. I'm not telling you what allocation to use for NVDA (if any); I'm using myself as an example. I share my limit buys, my reasoning, and the best available data from all 62 analysts. The bottom line? With high confidence, I'm riding into NVDA earnings with a record number of shares.
[4]
Nvidia: Stanley Druckenmiller Is Out (NASDAQ:NVDA)
Despite our long-term optimism about AI, short-term uncertainties around Nvidia's revenue growth and margin stability warrant a more cautious investment approach. Nvidia Corporation (NASDAQ:NVDA), previously known for its role in graphics, gaming and crypto, recently became the leading provider of AI compute and AI software solutions after the company experienced a boom in data center revenue. We have to admit that while we did anticipate back in 2022, before ChatGPT, that there would be a big switch to AI and data center revenue, we certainly did not expect it to hit the scale it is at today. For reference, Nvidia is up 706.65% in merely two years' time since we assigned it a "Buy" rating. Since then, we have continuously updated our thesis and maintained either "Strong Buy" or "Hold" ratings, despite the general bearish sentiment surrounding the stock, with many initially calling it a bubble. This is the first time, however, that we are changing our rating to a "Sell," as we believe it's finally time to trim after this ferocious rally. One notable strategist, Stan Druckenmiller, has also signaled in a recent 13F filing that he has mostly sold out of the position as of Q1, selling 1,545,370 shares amounting to $192.52M. He currently owns 214,060 shares, down from the 9,500,750 he once owned in Q2 2023. Besides steadily trimming his positions in Nvidia, he has also been trimming positions in other AI plays like Microsoft (MSFT) and adding to other positions which may benefit from AI in an indirect way, as we discuss further. We remain bullish on AI in the long term, but present our thesis on why we see a likely overspend cycle happening in the short to medium term. We also see Nvidia as fundamentally overpriced for the first time, with the valuation being too far ahead in our view, even based on future earnings expectations. 
We anticipate challenging times ahead for Nvidia after its spectacular rally, which we believe will primarily be attributable to an "AI overbuild," or an overly optimistic infrastructure buildout in the short term, as has usually been the case when investors grow overly optimistic about future returns on investment. Perhaps not exactly like the dot-com bubble, but people do still tend to get caught up in massive capital spending cycles, which Stan Druckenmiller has raised questions about as well, given their potential to impact margins and profitability drastically. Venture capital firm Sequoia calls this "the $600BN question," referring to the amount of revenue that end-user companies would need to generate to justify the current capital spending on GPUs. They derived this number by taking the current data center revenue for Nvidia, which is estimated to be $150BN already this year, on top of which another $150BN would be needed to run these GPUs in terms of AI data center infrastructure and utilities, as estimated by Nvidia. This means that to justify this $300BN capital expenditure, end-user companies, at the roughly 50% margin typical in the technology and software industry, would need to generate $600BN in revenue. While these may be quite rough calculations, they do show just how much end-user cash generation will be needed to sustain the infrastructure build-out that's ongoing. And from our perspective, we're not seeing this level of revenue generation or value creation just yet. Take the poster-child company of the AI world, OpenAI, for example: it is only expected to bring in $3.4BN in annual revenue. What's perhaps not often mentioned, and even worse, is the fact that OpenAI is currently quite the money pit, as they've burned through $8.5BN and are estimated to lose up to $5BN this year. This came after Sam Altman earlier this year went on an interview at Stanford and said: Whether we burn $500 million, $5 billion, or $50 billion a year, I don't care. 
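Sequoia's "$600BN question" reduces to two additions and a division, and it can be reproduced directly from the inputs the article lists (all figures are the article's estimates; variable names are ours):

```python
# Reconstructing the "$600BN question" from the article's inputs.

nvda_dc_revenue = 150e9       # est. Nvidia data center revenue this year
infra_and_utilities = 150e9   # est. additional cost to run those GPUs
capex = nvda_dc_revenue + infra_and_utilities   # $300B total build-out

end_user_margin = 0.50        # assumed typical tech/software margin, per the article
required_end_user_revenue = capex / end_user_margin

print(f"Required end-user revenue: ${required_end_user_revenue/1e9:.0f}B")  # $600B
```

Against that $600B hurdle, the article's OpenAI figure of $3.4BN in annual revenue illustrates just how wide the gap currently is.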
I genuinely don't, as long as we can stay on a trajectory where eventually we create way more value for society than that and as long as we can figure out a way to pay. (Sam Altman, Stanford) Since the spectacular launch and neck-breaking user growth that ChatGPT went through in late 2022 to early 2023, website traffic seems to have tapered off considerably, perhaps raising questions about the payoff of newer models relative to the capital spending required to create them. One other prime example of a possible "AI overbuild," we believe, is Andreessen Horowitz, a venture capital firm that is reportedly stockpiling approximately 20,000 Nvidia GPUs under its "Oxygen" project, simply in order to give its own portfolio companies access to these GPUs. Given the pricing of H100 GPUs, we could be talking about $500M to $800M worth of GPUs alone, not even taking into account the operational costs. The fact that they're renting out these GPUs at "below-market rates" also shows the significant trust investors at the VC level are placing in the future ability of current start-ups to generate massive amounts of free cash flow. Another indicator that we follow closely to track the development of Artificial General Intelligence is the prediction aggregator Metaculus. While the predicted timeline for the development of weak Artificial General Intelligence shortened drastically in 2022 and continued to shorten through 2023, it is now reversing: the first weak general AI system is now predicted to be announced in February 2028, against a prediction of 2026 at the end of last year. Despite the headwinds that we anticipate for hardware vendors such as Nvidia, and for Big Tech spending huge amounts of CapEx on AI compute, we are by no means bearish on the entire AI space. 
We also certainly do not believe that this AI story is over; on the contrary, we are bullish about the long-term prospects for AI and believe there is likely a shift happening within the AI space itself, from hardware and compute to end-user applications. We can see this shift in web search data, for example. Search interest in Nvidia's "CUDA" software has increased, but has mostly leveled off in the last few months. PyTorch and TensorFlow, which are key in building AI models, also show a flat to upward trend. Hugging Face, on the other hand, which is aimed at end users, is rising. Thus, we see the current period as a broader industry shift in focus from the hardware level and model building to end-user applications and integration. As another example, AI data centers consume a lot of electricity, and Stan Druckenmiller seems to be leaning in that direction, as he entered new positions in several energy companies last quarter. It could also be a rather defensive play in utilities in anticipation of a future recession. He also took a new position in Adobe (ADBE), currently one of the most prominent players in the AI space, which is integrating AI into its software solutions focused on end-user value creation. In the days of the dot-com bubble, a similar overbuild of infrastructure happened with fiber, as everyone overestimated demand in the short term. A lot of these infrastructure providers saw massive margin compression along with financial difficulty all the way to 2009. On the other hand, this overcapacity did drive down prices and, over the long term, paved the way for new innovators like Amazon (AMZN) and other big tech companies. AI could be similar, as Stan Druckenmiller describes it: AI could rhyme with the internet, as we go through all this capital spending we need to do. The payoff, while it's incrementally coming in by the day, the big payoff might be 4 to 5 years from now. 
So AI might be a little overhyped now, but underhyped long term. (Squawk Podcast) In similar fashion to the $600BN question, we also ask ourselves a much broader question about the Magnificent 7, which currently have a combined valuation of $15.57T, or $12.43T excluding Nvidia. The reason for the focus on these 7 companies is that they make up a big chunk of Nvidia's revenue. The 4 AI hyperscalers, which include Microsoft, Meta (META), Amazon (AMZN) and Google (GOOG), reportedly make up 40% of Nvidia's revenue alone. Some analysts even estimate that Nvidia's top 10 customers make up 60% to 70% of all revenue. Nvidia is by far the first company that has been able to tap into the largest pool of free cash flow, the Magnificent 7, at a time when they're hungry for growth. As a result, the CapEx of the Magnificent 7 saw a drastic increase, with a large amount of it being allocated to AI data centers. In a sense, with some analysts calling the Magnificent 7 a bubble, you could call Nvidia a "bubble inside another bubble." As shown in the graph above, the CapEx expansion of the 4 AI hyperscalers closely matches the huge increase in free cash flow growth Nvidia has seen. The question now remains whether the customers of these AI hyperscalers, which we believe are largely startups backed by the VC community and the Magnificent 7 itself, will be able to sustain the soaring costs of compute imposed on them by Nvidia, which sells GPUs at a sky-high markup. In theory, this CapEx spend can only be sustained if AI generates enough money/value, or if VCs and big tech companies are willing to continue funding losses. The last reason why we've changed our stance on Nvidia is, quite clearly, valuation concerns. On a fundamental level, when Nvidia recently and briefly became worth more than Apple (AAPL) and Microsoft, we ran a thought experiment and assessed how vital Microsoft, Apple and Nvidia are to the global economy. 
As recently showcased by the Microsoft Windows outage caused by an error from CrowdStrike (CRWD), millions of computers were disrupted, leading to grounded flights, disrupted supply chains, massive disruptions to businesses and government agencies, and even malfunctioning critical infrastructure such as basic healthcare systems. Similarly, we view Apple's products and ecosystem as so embedded that they could not currently be removed without causing utter chaos and disruption on a global scale. Nvidia, on the other hand, while important in areas like AI compute, graphics, the scientific community and breakthroughs in autonomous driving, just doesn't underpin the economy as much as Apple and Microsoft do, in our view. Similarly, we also don't see why Nvidia should trade at over 3x Taiwan Semiconductor's (TSM) valuation, given Nvidia's massive reliance on TSM to fabricate its chips. From our view, Taiwan Semiconductor has a lot more leverage over Nvidia than some may think, and is already hinting at pushing costs through to Nvidia in the form of higher prices. Nvidia, for example, recently abandoned the idea of a future Samsung fab, which we believe is likely due to the far superior wafer yields obtained at TSM. Not only does Nvidia rely solely on TSM's production, but so does virtually the whole world, making us puzzled about the recent divergence between TSM's valuation and Nvidia's. And even with TSM building a fab in the United States, we expect it to be years behind when it comes online; TSMC founder Morris Chang himself has stated that the fab would be 2-2.5x more expensive, with labor costs 50% higher. In addition, we see Nvidia playing out quite similarly to how Tesla (TSLA) played out in 2023, in the sense that we see downside risks to margins once supply finally catches up with demand and any additional competition enters the space.
Similar to Nvidia, Tesla had the privilege of being years ahead of competitors in EV production and expertise, which allowed it to expand gross margins to a record 29.11% in Q1 2022. Today, however, gross margins are about the lowest they've been in 5 years, at 17.95% as of last quarter. While Nvidia was able, as we expected, to expand margins in a tight market, the opposite could happen when demand eases. Even if competitors can't catch up to or challenge Nvidia's "CUDA" software stack, Nvidia will very likely still face massive scrutiny, as it already does, from the DOJ and FTC, along with a wave of antitrust litigation of the kind faced by OpenAI, Microsoft and Google. Taking into account the aforementioned risks, we find the current valuation Nvidia trades at rather irrational. At an expected Q2 EBITDA of $18.9BN, or $75.6BN at an annualized rate, even at the generous 30x multiple we usually give Nvidia, it should be worth an enterprise value of $2.27T. Adding back the roughly $20.20BN net cash position brings us to a market cap of $2.29T, or $93.09 per share, at a very generous valuation of 30x forward EV/EBITDA. From a price to free cash flow (P/FCF) perspective, Nvidia looks even more expensive, generating $14.98BN in FCF, or $59.90BN annualized. Taking into account an additional $4.04BN in share-based compensation, Nvidia would be trading at an adjusted P/FCF multiple of 56.93x. As things stand right now, with year-over-year revenue growth, according to estimates, likely to peak in Q2, and operating margins standing at an eye-watering 64.9% compared to the usual 30-35% in recent history, we see most of the risks, like disappointing growth or demand, pointed to the downside. As Stan Druckenmiller would say: it's not a fat pitch. Recent insider selling has also been ramping up, as can be seen in the graph below, with Jensen Huang being the most prominent seller.
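The valuation arithmetic above can be sanity-checked in a few lines. A minimal sketch: the EBITDA, net cash, FCF and SBC figures come from the article; the share count and prevailing market cap are our own assumptions, backed out from the article's $93.09 per-share figure and 56.93x multiple, not disclosed by the author.

```python
# Back-of-the-envelope check of the valuation math, all figures in $BN.
q2_ebitda = 18.9                  # expected Q2 EBITDA (article)
annual_ebitda = q2_ebitda * 4     # 75.6 annualized
ev_multiple = 30                  # "generous" forward EV/EBITDA (article)
ev_bn = annual_ebitda * ev_multiple        # ~2,268 -> ~$2.27T enterprise value

net_cash = 20.20                  # net cash position (article)
fair_cap = ev_bn + net_cash                # ~2,288 -> ~$2.29T market cap
shares_bn = 24.58                 # ASSUMED share count, implied by $93.09/share
fair_price = fair_cap / shares_bn          # ~$93.09 per share

q2_fcf = 14.98                    # quarterly free cash flow (article)
annual_fcf = q2_fcf * 4           # ~59.9 annualized
sbc = 4.04                        # annualized share-based comp (article)
adj_fcf = annual_fcf - sbc                 # SBC-adjusted FCF
market_cap = 3180                 # ASSUMED market cap at the time (~$3.18T)
adj_p_fcf = market_cap / adj_fcf           # ~56.9x, vs. the article's 56.93x

print(round(ev_bn), round(fair_cap), round(fair_price, 2), round(adj_p_fcf, 2))
```

Under these assumptions the numbers reconcile: a 30x multiple on annualized EBITDA plus net cash reproduces the $93.09 fair-value estimate, and the adjusted P/FCF lands within rounding of the quoted 56.93x.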
Since 2021, according to our data, there has been only insider selling, with cumulative volume totaling approximately $2.87BN. Although we are changing our rating for Nvidia from "Strong Buy" to "Sell," we are not overly bearish on the stock; we simply believe it is finally time to seriously wind down, or even exit, the position completely. To be clear, there is no universe in which we would consider shorting the stock, as we are bullish on AI over the long term. We could be wrong on some level, for example by underestimating the potential of Nvidia's Blackwell architecture. But as things stand, the risks for Nvidia appear skewed to the downside, and we think the valuation is ahead of the curve, even on a forward basis. Still, we think there are other ways to gain exposure to AI that could see serious tailwinds should an "AI overbuild" occur, such as energy and utility stocks and companies that benefit from implementing AI at the end-user level.
As NVIDIA approaches its earnings report, investors are divided. Some see potential for continued growth, while others express caution due to high valuation and market expectations.
NVIDIA Corporation, a leader in the semiconductor industry, is approaching its highly anticipated earnings report. The company has been riding a wave of success, driven by the booming demand for AI chips and graphics processors. Investors and analysts are closely watching NVIDIA's performance, given its significant impact on the broader tech sector and its position at the forefront of the AI revolution [1].
Many investors remain bullish on NVIDIA, citing several key factors supporting their optimism. The company's dominance in the AI chip market, coupled with the expanding applications of AI across various industries, presents a strong case for continued growth. Some analysts argue that any dips in NVIDIA's stock price should be viewed as buying opportunities, given the company's long-term potential [1].
Furthermore, NVIDIA's diversified product portfolio, including its data center solutions and gaming hardware, provides multiple revenue streams and helps mitigate risks associated with market fluctuations [3].
Despite the positive outlook, some investors and analysts express caution regarding NVIDIA's current valuation. The company's stock has experienced significant appreciation, leading to concerns about whether future growth is already priced in. Critics argue that the high expectations set by the market may be challenging to meet consistently, potentially leading to volatility in the stock price [2].
Notable investors, such as Stanley Druckenmiller, have recently adjusted their positions in NVIDIA, citing short-term uncertainties. This move has sparked discussions about the appropriate investment approach for the company. While some advocate for a more cautious stance given the current market conditions and NVIDIA's valuation, others maintain that the company's long-term growth prospects outweigh short-term fluctuations [4].
NVIDIA's performance is closely tied to broader industry trends, including the ongoing chip shortage and the rapid advancement of AI technologies. The company's ability to navigate supply chain challenges and maintain its technological edge over competitors will be crucial factors in its future success. Investors are keenly watching how NVIDIA manages these industry dynamics and capitalizes on emerging opportunities in areas such as autonomous vehicles and edge computing [2].
As NVIDIA prepares to release its earnings report, the market remains divided on the company's near-term prospects. While the long-term growth narrative remains strong, investors must weigh the potential risks against the rewards in this dynamic and rapidly evolving sector.