17 Sources
[2]
Nvidia shares spooked as China's DeepSeek raises questions over AI capex By Investing.com
Investing.com-- Artificial intelligence darling Nvidia's shares fell on Friday following the launch of Chinese generative AI program DeepSeek last week, which claimed to be able to outperform rival offerings at a fraction of their cost. NVIDIA Corporation (NASDAQ:NVDA) shares fell more than 3% to $142.62 on Friday, although they still ended higher for the week. DeepSeek, which is funded by Chinese quant firm High-Flyer, made its R1 LLM open source, and also released a paper outlining how advanced LLMs can be built on much smaller budgets. DeepSeek had access to about 50,000 Nvidia H100 AI GPUs, the previous generation of Nvidia's AI GPUs, which are set to be replaced by the Blackwell series. It was unclear to what extent the company had leveraged this access for its LLM. But the biggest point of contention over DeepSeek R1 was that the LLM was able to achieve competitive results despite its rivals, such as OpenAI's ChatGPT and Meta's Llama, having access to substantially bigger budgets and more advanced technology. This was especially relevant in light of recent U.S. export controls on chip technology to China, as well as U.S. President Donald Trump announcing $500 billion in private spending to build more U.S. AI infrastructure. DeepSeek R1 raised questions about whether the billions in capital expenditure undertaken by Wall Street's technology giants are necessary for AI advancements. Of the so-called Magnificent 7 stocks, Microsoft (NASDAQ:MSFT), Meta (NASDAQ:META), and Alphabet (NASDAQ:GOOGL) have announced rapidly increasing capital expenditures in recent quarters to further their AI ambitions. Analysts expect this to remain the case when they report their December quarter earnings this week. "It might be good news for the Mag-7 that can learn from DeepSeek to design AI systems with cheaper GPUs. That would reduce their capital spending and boost their profits. It might not be a happy development for Nvidia," Yardeni Research said in a note.
Nvidia has benefited greatly from increased AI investment in the past two years, seeing exponential growth in sales as tech giants sought to bolster their AI development by increasing their data center capabilities. But DeepSeek's success raises questions over whether AI developers can achieve progress by building leaner, more efficient models that don't require as much investment in chip technology. JPMorgan's Joshua Meyers said concerns over higher AI budgets were "overdone," while noting that DeepSeek's efficiency may have been out of necessity, given that Chinese firms are blocked from access to advanced U.S. chip technology. "If DeepSeek can reduce the cost of inference, then others will have to as well, and demand will hopefully more than make up for that over time," Meyers wrote in a short note dated Saturday.
[3]
Is Nvidia stock still the AI king or losing its edge: The DeepSeek effect
Nvidia's stock faces mixed signals as Taiwan Semiconductor Manufacturing (TSMC) reports strong growth in AI chip demand, while China's DeepSeek raises questions about AI-related capital expenditure. TSMC's bullish outlook contrasts with concerns over DeepSeek's cost-efficient AI advancements, impacting Nvidia's market position. Taiwan Semiconductor Manufacturing (TSMC) released its Q4 2024 results on Jan. 16, revealing a 37% year-over-year revenue increase to $26.9 billion. The company anticipates Q1 2025 revenue to grow 34% to $25.4 billion, with full-year 2025 revenue expected to rise in the mid-20% range. TSMC, which manufactures chips for Nvidia, predicts AI chip sales to double in 2025 due to "the strong surge in AI-related demand." This has led to a significant increase in capital expenditures, projected at $38 billion to $42 billion, with 70% allocated to advanced chips. TSMC's advanced chips, based on 7-nanometer (nm) or smaller process nodes, accounted for 74% of Q4 revenue, up from 67% a year ago. Notably, 60% of revenue came from 3nm and 5nm chips, which are critical for manufacturing AI GPUs, including Nvidia's. Nvidia has reportedly secured 60% of TSMC's advanced chip packaging capacity for 2025, positioning it to meet growing demand for its Blackwell GPUs. Despite challenges like U.S. export restrictions, TSMC's management believes these restrictions are "manageable," offering a positive outlook for Nvidia. China's DeepSeek has disrupted the AI landscape with its cost-efficient large language models (LLMs), raising questions about the necessity of high capital expenditure in AI development. DeepSeek's latest model, developed at a fraction of the cost of U.S. competitors, reportedly used 50,000 of Nvidia's H100 AI GPUs. This has led to concerns that tech firms may adopt leaner approaches, potentially reducing demand for advanced AI chips. Nvidia's shares fell 5.2% following DeepSeek's announcement, reflecting market uncertainty. 
While some analysts, like those at JPMorgan, argue that concerns over AI budgets are "overdone," others, such as Raymond James' Srini Pajjuri, question how DeepSeek's efficiency will impact compute intensity and semiconductor demand. DeepSeek's success, achieved with $5.6 million over two months, contrasts sharply with the massive investments by U.S. tech giants. This has led to speculation about potential profit margin squeezes for companies like Google, Meta, and Microsoft, which are heavily investing in AI infrastructure. Despite a 171% stock gain in 2024, Nvidia remains attractively valued at 30 times fiscal 2026 earnings estimates. Analysts project a 51% increase in earnings to $4.45 per share, with a price/earnings-to-growth (PEG) ratio of 0.99, indicating potential undervaluation. TSMC's optimistic outlook and increased capex spending further bolster Nvidia's growth prospects. However, competition from companies like AMD and the potential for technological breakthroughs, such as quantum computing, pose risks to Nvidia's dominance. Looking ahead, Nvidia is expected to remain a key player in AI, with opportunities in autonomous ride-hailing, augmented reality (AR), and virtual reality (VR). Analysts predict a compound annual growth rate (CAGR) of 15% over the next decade, potentially quadrupling Nvidia's market cap to over $14 trillion by 2035. However, macroeconomic factors and competition could temper this growth, making Nvidia's future both promising and uncertain. Disclaimer: The content of this article is for informational purposes only and should not be construed as investment advice. We do not endorse any specific investment strategies or make recommendations regarding the purchase or sale of any securities.
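As a quick illustration of the valuation math above, a PEG ratio simply divides a price-to-earnings multiple by an expected earnings growth rate. The sketch below uses hypothetical inputs, not the article's exact assumptions; the growth figure analysts plug in varies (one-year versus long-run estimates), which is why published PEG values for the same stock can differ.

```python
def peg_ratio(pe: float, growth_pct: float) -> float:
    """Price/earnings-to-growth ratio: a P/E multiple divided by
    expected earnings growth (expressed in percent). Readings near
    or below 1.0 are conventionally read as reasonable value
    relative to growth."""
    if growth_pct <= 0:
        raise ValueError("PEG requires a positive expected growth rate")
    return pe / growth_pct

# Hypothetical example: a stock at 30x forward earnings with
# 30% expected annual earnings growth.
print(round(peg_ratio(30.0, 30.0), 2))  # 1.0
```

Note that the same 30x multiple yields a very different PEG depending on the growth assumption, which is one reason two analysts can quote different PEG figures for the same stock.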
[4]
The DeepSeek sell-off: What major analysts are saying about Nvidia, possible AI bubble popping
Wall Street woke up Monday to an artificial intelligence stock rout driven by the emergence of Chinese startup DeepSeek, which gained buzz over the last week for possibly building a competitive model for a fraction of the money that big U.S. companies such as Meta Platforms and Microsoft are currently spending. Early takes from the Street show analysts and strategists are concerned that capital expenditures could come down initially if the DeepSeek story is even partly true, but that overall it will prove a buying opportunity for the stocks. But for now, a major sell-off was playing out. Nvidia shares were among the hardest hit as the need for the AI leader's fastest chips was called into question. "The AI investment cycle may be over-hyped and a more efficient future is possible," wrote JPMorgan analyst Sandeep Deshpande, in a note. Here's a wrap of what major analysts are saying: JPMorgan analyst Sandeep Deshpande: "Investors are concerned that rather than impede China's progress in AI, the US restrictions have engendered innovation that has enabled the development of a model that prioritises efficiency. ... The news over the past few months has been about the huge capex announcements of Microsoft, which is spending $80bn in '25, while Meta recently announced investments between $60bn and $65bn. OpenAI also announced that the Stargate project intends to invest $500bn over the next four years building new AI infrastructure in the US. Thus, with these considerable sums flowing into AI investments in the US, that DeepSeek's highly efficient and lower resource-intensive AI model has shown such significant innovation and success is posing thoughts to investors that the AI investment cycle may be over-hyped and a more efficient future is possible."
Jefferies analyst Edison Lee: "Re-evaluating computing power needs could cause 2026 AI capex to fall (or not grow)...We believe DS's success could drive two possible industry strategies: 1) still pursue more computing power to drive even faster model improvements, and 2) refocus on efficiency and ROI, meaning lower demand for computing power as of 2026." Bernstein analyst Stacy Rasgon: "Is DeepSeek doomsday for AI buildouts? We don't think so...we believe that 1) DeepSeek DID NOT "build OpenAI for $5M"; 2) the models look fantastic but we don't think they are miracles; and 3) the resulting Twitterverse panic over the weekend seems overblown." Rasgon acknowledged that DeepSeek's models are good, kept his outperform ratings on Nvidia and Broadcom, and advised clients not to buy into the doomsday scenarios. Citi analyst Atif Malik: "While the dominance of the US companies on the most advanced AI models could be potentially challenged, that said, we estimate that in an inevitably more restrictive environment, US' access to more advanced chips is an advantage. Thus, we don't expect leading AI companies would move away from more advanced GPUs." Malik maintained a buy rating on Nvidia. Raymond James' semiconductor analyst Srini Pajjuri: "If DeepSeek's innovations are adopted broadly, an argument can be made that model training costs could come down significantly even at U.S. hyperscalers, potentially raising questions about the need for 1-million XPU/GPU clusters as projected by some. ... A more logical implication is that DeepSeek will drive even more urgency among U.S. hyperscalers to leverage their key advantage (access to GPUs) to distance themselves from cheaper alternatives." Pajjuri reiterated buy ratings on Nvidia and ASML. Cantor analyst C.J. Muse: "Following release of DeepSeek's V3 LLM, there has been great angst as to the impact for compute demand, and therefore, fears of peak spending on GPUs. 
We think this view is farthest from the truth and that the announcement is actually very bullish with AGI seemingly closer to reality and Jevons Paradox almost certainly leading to the AI industry wanting more compute, not less." Muse said buy Nvidia on any weakness.
[5]
China's DeepSeek AI Model Shocks the World: Should You Sell Your Nvidia Stock? | The Motley Fool
Could Nvidia's (NVDA -15.01%) magical two-year run be coming to an end? Up until now, there has been insatiable demand for Nvidia's latest and greatest graphics processing units (GPUs). As the artificial intelligence races heated up, big tech companies and start-ups alike rushed to buy or rent as many of Nvidia's high-performance GPUs as they could in a bid to create better and better models. But last week, Chinese AI start-up DeepSeek released its R1 model, which stunned the technology world. R1 is a "reasoning" model that has matched or exceeded OpenAI's o1 reasoning model, which was just released at the beginning of December, for a fraction of the cost. Being able to generate leading-edge large language models (LLMs) with limited computing resources could mean that AI companies might not need to buy or rent as much high-cost compute in the future. The consequences could be devastating for Nvidia and last year's AI winners alike. But as always, the truth is more complicated. DeepSeek is an AI lab spun out of a quantitative hedge fund called High-Flyer. CEO Liang Wenfeng founded High-Flyer in 2015 and began the DeepSeek venture in 2023 after the earth-shaking debut of ChatGPT. DeepSeek has been building AI models ever since, reportedly purchasing 10,000 Nvidia A100s, two generations prior to the current Blackwell chips, before they were restricted. DeepSeek also reportedly has a cluster of Nvidia H800s, which is a capped, or slowed, version of the Nvidia H100 designed for the Chinese market. Of note, the H100 is the latest generation of Nvidia GPUs prior to the recent launch of Blackwell. On Jan. 20, DeepSeek released R1, its first "reasoning" model based on its V3 LLM. Reasoning models are relatively new, and use a technique called reinforcement learning, which essentially pushes an LLM to go down a chain of thought, reverse course if it runs into a "wall," and explore alternative approaches before arriving at a final answer. 
Reasoning models can therefore answer complex questions with more precision than straight question-and-answer models can. Incredibly, R1 has been able to meet or even exceed OpenAI's o1 on several benchmarks, while reportedly being trained at a small fraction of the cost. Just how cheap are we talking about? The R1 paper claims the model was trained on the equivalent of just $5.6 million in rented GPU hours, which is a small fraction of the hundreds of millions reportedly spent by OpenAI and other U.S.-based leaders. DeepSeek also charges about one-thirtieth of what OpenAI's o1 costs to run, while Wenfeng maintains DeepSeek charges for a "small profit" above costs. Experts have estimated that Meta Platforms' (META 0.97%) Llama 3.1 405B model cost about $60 million in rented GPU hours to train, compared with the $6 million or so for V3, even as V3 outperformed Llama's latest model on a variety of benchmarks. According to an informative blog post by Kevin Xu, DeepSeek was able to pull this minor miracle off with three unique advantages. First, Wenfeng built DeepSeek as sort of an idealistic AI research lab without a clear business model. Currently, DeepSeek charges a small fee for others seeking to build products on top of it, but otherwise makes its open-source model available for free. Wenfeng also recruited largely young people who had just graduated from school or who were in Ph.D. programs at China's top universities. This led to a culture of free experimentation and trial-and-error without big expectations, and set DeepSeek apart from China's tech giants. Second, DeepSeek uses its own data center, which allowed it to optimize the hardware racks for its own purposes. Finally, DeepSeek was able to optimize its learning algorithms in a number of ways that, taken together, allowed it to maximize the performance of its hardware. 
For instance, DeepSeek built its own parallel processing framework from the ground up, called HAI-LLM, which optimized computing workloads across its limited number of chips. DeepSeek also uses FP8, an 8-bit floating-point data format, which is less precise than FP32. While FP8 is "less precise," it saves a great deal of memory, and R1's other processes were able to make up for the lower precision with a greater number of efficient calculations. DeepSeek also optimized its load-balancing networking kernel, maximizing the work done by each H800 cluster, so that no hardware was ever left "waiting" for data. These are just a few of the innovations that allowed DeepSeek to do more with less. But cobbled together, these "hacks" produced a remarkable increase in performance. The negative implication for Nvidia is that by innovating at the software level as DeepSeek has done, AI companies may become less dependent on hardware, which could affect Nvidia's sales growth and margins. As dire as R1 may seem for Nvidia, there are several counterpoints to the thesis that Nvidia is "doomed." First, some are skeptical that the Chinese startup is being totally forthright in its cost estimates. According to machine learning researcher Nathan Lambert, the $5.6 million figure for rented GPU hours probably doesn't account for a number of extra costs. These extra costs include significant pre-training hours prior to training the large model, the capital expenditures to buy GPUs and construct data centers (if DeepSeek truly built its own data center and didn't rent from a cloud), and high energy costs. There is also the matter of DeepSeek's engineering salaries, as R1 had 139 technical authors. Since DeepSeek is open-source, not all of these authors are likely to work at the company, but many probably do, and their salaries add to the cost. 
Lambert estimates DeepSeek's annual costs for operations are probably closer to between $500 million and $1 billion. That's still far below the costs at its U.S. rivals, but obviously much more than the $6 million put forth by the R1 paper. There are also some who simply doubt DeepSeek is being forthright in its access to chips. In a recent interview, Scale AI CEO Alexandr Wang told CNBC he believes DeepSeek has access to a 50,000-H100 cluster that it isn't disclosing, because those chips are barred from export to China under the 2022 export restrictions. However, given that DeepSeek has openly published its techniques for the R1 model, researchers should be able to emulate its success with limited resources. As of now, it appears the R1 efficiency breakthrough is more real than not. While DeepSeek is no doubt impressive, ex-OpenAI executive Miles Brundage also cautioned against reading too much into R1's debut. Brundage notes that OpenAI is already out with its o3 model and soon its o5 model. While DeepSeek has been able to hack its way to R1 with novel techniques, its limited computing power is likely to slow down the pace at which it can scale up and advance from its first reasoning model. Brundage also notes that limited computing resources will affect how these models can perform simultaneously in the real world: Even if that's the smallest possible version while maintaining its intelligence -- the already-distilled version -- you'll still want to use it in multiple real-world applications simultaneously. You wouldn't want to choose between using it for improving cyber capabilities, helping with homework, or solving cancer. You'd want to do all of these things. This requires running many copies in parallel, generating hundreds or thousands of attempts at solving difficult problems before selecting the best solution. ... To make a human-AI analogy, consider Einstein or John von Neumann as the smartest possible person you could fit in a human brain. 
You would still want more of them. You'd want more copies. That's basically what inference compute or test-time compute is -- copying the smart thing. It's better to have an hour of Einstein's time than a minute, and I don't see why that wouldn't be true for AI. Finally, investors should keep in mind the Jevons paradox. Coined by English economist William Stanley Jevons in 1865 regarding coal usage, this is the phenomenon that occurs when a technological process is made more efficient. According to the Jevons paradox, if a resource is used more efficiently, consumption of that resource often increases rather than decreases, because the added demand more than offsets the efficiency gained. For AI, if the cost of training advanced models falls, look for AI to be used more and more in our daily lives. That should, according to the paradox, actually increase demand for computing power -- although probably more for inference rather than training. So that could actually benefit Nvidia, strangely. On the other hand, it is thought that AI inferencing may be more competitive relative to training for Nvidia, so that may be a negative. But that negative would arise from more competition, not decreased computing demand. The bottom line is that demand for AI computing should continue to grow a lot for years to come. After all, on Jan. 24, Meta Platforms CEO Mark Zuckerberg announced that Meta would be building an AI data center almost as big as Manhattan and will ramp up its capital spending to a range of $60 billion to $65 billion this year, up from a range of $38 billion to $40 billion in 2024. This announcement came four days after DeepSeek's release, so there was no way Zuckerberg wasn't aware of it. Yet he still thinks a huge 50%-plus increase in AI infrastructure spending is warranted. No doubt, the advent of DeepSeek will have an effect on the AI races. 
But rather than being "game over" for Nvidia and other "Magnificent Seven" companies, the reality will be more nuanced. As the AI races progress, investors will have to assess which companies have a true AI "moat," as AI business models evolve at rapid speed and in surprising ways, as DeepSeek R1 just showed.
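The memory argument behind the 8-bit formats described in the article above is easy to see in miniature. Python's standard library has no 8-bit float type, so this minimal sketch uses a signed 8-bit integer array purely as a storage-size stand-in for FP8; the point is the fixed 4x saving versus 32-bit values, not the numerics of the FP8 format itself.

```python
from array import array

N = 1_000_000  # number of model weights in this toy example

# 32-bit floats: 4 bytes per value.
fp32_weights = array('f', bytes(4 * N))

# 8-bit values: 1 byte per value (int8 as a storage stand-in for FP8).
fp8_weights = array('b', bytes(N))

fp32_bytes = fp32_weights.itemsize * len(fp32_weights)
fp8_bytes = fp8_weights.itemsize * len(fp8_weights)

print(fp32_bytes // fp8_bytes)  # 4: the 8-bit copy needs a quarter of the memory
```

That quarter-of-the-memory saving applies equally to memory bandwidth, which is why, as the article notes, a lab with a limited chip budget would accept lower precision and compensate with more, cheaper calculations.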
[6]
Analysts aren't panicking about Nvidia stock after China's DeepSeek AI advance sparked a sell-off
This story incorporates reporting from Finbold | Finance in Bold and MSN. Following a dip in Nvidia's stock value due to concerns over AI capital expenditure and an AI advancement in China, top analysts have re-evaluated their positions. This reaction aligns with the release of DeepSeek's AI assistant app and its new large language models, DeepSeek-R1 and DeepSeek R1 Zero, earlier in the month. These advancements have sparked industry discussions about their potential impact on the AI landscape. Citi's Atif Malik maintained a "Buy" rating for Nvidia on Jan. 27, keeping the stock price target steady at $175. The assessment underscores U.S.-based AI firms' edge in accessing cutting-edge technology, a strategic advantage in the AI sector. This bullish stance reflects confidence in Nvidia's resilience despite competitive pressures from international startups like DeepSeek. Cantor Fitzgerald analysts echoed support for Nvidia, citing the firm's robust infrastructure and market position. The recent developments emphasize the significance of technological adaptability in maintaining market leadership. As DeepSeek's offerings continue to evolve, the focus remains on how established players like Nvidia will navigate this shifting terrain.
[7]
Nvidia shares slide 5% as China's DeepSeek sparks questions over AI-related capex By Investing.com
Investing.com-- NVIDIA Corporation (NASDAQ:NVDA) shares fell over 5% in 24-hour markets, Robinhood data showed on Sunday evening, amid growing questions over the need for major capital spending on artificial intelligence after the release of China's DeepSeek. Nvidia slid 5.2% to an indicated $135.20, Robinhood data showed, with shares extending a 3.2% loss from Friday. The AI darling was rattled by the release of DeepSeek R1, a large language model that claims to rival offerings from ChatGPT and Meta (NASDAQ:META) while using a fraction of their budgets. DeepSeek, which is funded by Chinese quant fund High-Flyer, reportedly had access to about 50,000 of Nvidia's H100 AI GPUs, which are from the last generation of advanced AI chips. DeepSeek's release raised concerns that technology firms could adopt leaner, more capital-efficient approaches to AI development, requiring less capital expenditure on data centers and advanced AI chips. Yardeni Research analysts said that while major tech firms could learn from DeepSeek to design cheaper AI systems, "it might not be a happy development for Nvidia." JPMorgan (NYSE:JPM) analysts argued that concerns over higher AI budgets were "overdone," adding that DeepSeek's efficiency came more from necessity, especially in light of strict U.S. export controls on China's chip industry. Six of Wall Street's so-called Magnificent 7 firms, which make up the bulk of Nvidia's biggest customers, are set to report quarterly earnings this week, and are widely expected to announce increased capex on AI development. AI giant OpenAI last week also announced a joint $500 billion investment in U.S. AI infrastructure.
[8]
Nvidia Plummeted Today -- Time to Buy the Artificial Intelligence (AI) Leader's Stock? | The Motley Fool
Nvidia (NVDA -16.97%) stock saw a dramatic sell-off in Monday's trading following developments that have raised questions about the demand outlook for its advanced processors. The artificial intelligence (AI) leader's share price closed out the day's trading down 17.2%. Nvidia plummeted today as investors, analysts, and tech leaders parsed new information about DeepSeek R1 -- a new open-source AI model from a Chinese start-up that was released last week. DeepSeek's R1 is delivering performance that measures up to and even exceeds OpenAI's latest commercially available ChatGPT system in some respects, raising concerns about the U.S.'s positioning in the AI race with China. But the real kicker for Nvidia is that the R1 model was supposedly developed and trained using a relatively small number of the hardware leader's processors. The training of DeepSeek's new model was reportedly conducted using Nvidia's A100 graphics processing units (GPUs) along with substantial support from less powerful hardware. Investors are concerned that R1 signals that high-performance AI models can be created with significantly lower hardware requirements, and Nvidia stock saw a big pullback as a result. Today's big sell-off for Nvidia stock is a reminder that the AI industry is still young and volatile. While the GPU specialist currently commands significant technological and market-share advantages in the AI processor space, charting its long-term performance trajectory in the space still involves plenty of speculation. If the processing demands needed to develop, train, and scale artificial intelligence models wind up being significantly reduced, it's reasonable to expect that Nvidia could face growth headwinds and underperform Wall Street's growth estimates. On the other hand, there's still a lot about DeepSeek's R1 that isn't known yet, and some analysts and tech leaders have raised questions about whether it was really created without a large number of ultra-high-end processors. 
With the passage of time, it's possible that today's market reaction to the new Chinese AI model will wind up seeming severely overblown. Up to this point, the best AI models have been heavily reliant on Nvidia's high-end processors. Comments from Microsoft and Meta Platforms about their AI infrastructure spending plans and the $500 billion Stargate U.S. data center project generally point to a very strong demand outlook for the hardware leader. Nvidia's push into categories including artificial-intelligence-as-a-service and robotics also points to significant growth opportunities outside of the core processors space. So while shares could continue to see volatility as the market digests the potential implications of DeepSeek's R1 and other catalysts, today's big sell-off actually could be a good entry point for the stock.
[9]
Wall Street Lunch: DeepSeek Disrupts The AI Trade - For Now
AI stocks got slammed as the China app challenged cost and demand assumptions. (0:15) Bond yields plunge and cash moves to safety. (4:08) Analysts see buying opportunity. (5:01) DeepSeek dropped a depth charge into Wall Street's waters today. But it didn't completely deep-six the dip buyers. In a nutshell, hype around China's recently introduced DeepSeek R1 self-learning AI model threatened many investor assumptions about the AI equity trade. Nomura strategist Charlie McElligott says DeepSeek is a shock-disruption agent "where the sudden introduction of such a cheap-yet-completely viable AI tool which is self-guiding its own training and built via reinforcement learning ... without standard controls (MAGNITUDES lower operational cost not requiring the massive semiconductor spend of the newest and most expensive chips) is threatening the prior assumptions on the AI ecosystem, with massive implications on Pricing Power, CAPEX/Spending trends and market Valuations of the current hierarchy of leadership in the space." Seeking Alpha tech editor Chris Ciaccia has more analysis, including whether DeepSeek's claim that it cost less than $6 million to develop can be trusted: "Artificial intelligence stocks dropped sharply this morning led by Nvidia (NASDAQ:NVDA), Broadcom (AVGO), Microsoft (MSFT), and the like, all down at least 6%. Nvidia and Broadcom are down 11% and 12%, respectively. There's this concern that the model from this Chinese AI lab DeepSeek, which was released over the past few weeks, could cut AI computing costs drastically. In the research paper, DeepSeek said its training costs were $5.6 million, whereas OpenAI's last model cost at least $100 million. So, that's what you're starting to see here, even though there are concerns about whether that $5.6 million figure is actually true. 
Personally, I think this is a little bit of a buying opportunity because, like I said, there is a concern about whether that $5.6 million number is actually true. And if you actually read the research paper, DeepSeek says that figure excludes several different costs, including inferencing and a couple of other related things. And as I mentioned in the story that I wrote this morning, you've also seen the Chinese government come out with huge numbers requested for AI spending over the next few years. I think in their policy initiative for this year, they requested at least $137 billion in funding over the next five years. So, if they're requesting $137 billion in funding, and DeepSeek is claiming that training one model only costs $5.6 million, the two of those don't add up. There's something that's a little bit off. I would also wonder about the fact that this was released on the same day that President Trump was inaugurated. Maybe there might be some political aspects to it. It doesn't pass the smell test in my opinion. And I think that's why you started to see a bunch of sell-side analysts come out and say that this is a buying opportunity for AI stocks." What does that mean for trading today? Growth is getting hammered, while defensive sectors like Consumer Staples (XLP) and Healthcare (XLV) are in the lead. The Nasdaq (COMP.IND) is tumbling as Nvidia (NVDA) and other AI-tied names see a 10% haircut. It's down 3.5% in volatile trading, but futures had been down more than 4%. The S&P (SP500) is faring much better, down 2%, while the Dow (DJI), which has been shunned, missing out on the momentum trades and growth names, is back in fashion and fighting to stay in the green. 
McElligott says the lack of breadth in stocks from such a prolonged period of leadership via a small group of companies can now pose "an outright stability risk," with huge megacap weightings equaling a problem on an equity index level "thanks to crowded single-name ownership amongst institutions and retail." As expected in a risk-off move, cash is moving to the safety of bonds and rates are tumbling. The 10-year Treasury yield (US10Y) is also off its lows, but still down, back around 4.55%. Other markets are also feeling the DeepSeek contagion. Quantum computing stocks are selling off; infrastructure and nuclear stocks, expected to supply the massive power needs of data centers, are sinking; and crypto is having a typical risk-off reaction, with bitcoin (BTC-USD) at multi-day lows. On the flip side, ADRs of China's Aurora Mobile (JR) more than doubled after it added DeepSeek's AI model to its platform. And stocks overall saw some buying appetite after DeepSeek suddenly announced that it was restricting new registrations to those with China mainland phone numbers. The company said: "To ensure continued service, registration is temporarily limited to +86 phone numbers. Existing users can log in as usual. Thanks for your understanding and support." In more fodder for dip buyers, you heard Chris Ciaccia talk about bullish analyst calls, so we'll make this a special DeepSeek Wall Street Research Corner. Wedbush analysts called the selloff a "golden buying opportunity," adding that while "the model is impressive, and it will have a ripple impact, the reality is that (Magnificent) 7 and US tech is focused on the (artificial general intelligence) endgame with all the infrastructure and ecosystem that China and especially DeepSeek cannot come close to in our view." TD Cowen analysts say they "see little reason DeepSeek's innovation should lead to a weaker near-term demand environment." 
Societe Generale strategist Manish Kabra cited his colleagues at Bernstein, who simply think that DeepSeek did not "build OpenAI for $5M." They agree DeepSeek's models look fantastic, but they do not think they are miracles and do think the resulting Twitterverse panic over the weekend seems overblown. And Oppenheimer analyst Edward Yang says the DeepSeek model could be "counterintuitively" bullish for semiconductor equipment companies by "lowering AI entry costs at the LLM/application layer; expanding the pool of economical buyers at the infrastructure layer beyond megacap hyperscalers; clarifying AI ROI through cheaper, more capable agents for broader applications; and intensifying competitive urgency in generative AI, spurring greater investment." Looking to the reaction from the Seeking Alpha analyst community, in the bear camp: James A. Kostohryz, investing group leader of Successful Portfolio Strategy, says: "The implications are clear. P/E ratios for most AI-related US tech firms will plummet. I expect that P/E ratios for the tech sector overall should contract by at least 30% over the course of the next 12 months." And Eugenio Catone says the "AI euphoria seems to have come to an end, and it is rather curious that all this is happening just before the release of the earnings reports. Better-than-expected results may alleviate this moment of panic, but I think this news is bound to have an impact over time." In the bull camp, Oakoff Investments, leader of the Beyond the Wall Investing Group, upgraded Nvidia to Buy "viewing the dip as a buying opportunity amid market panic." "DeepSeek's success, using Nvidia's H800 GPUs, highlights Nvidia's critical role in AI, suggesting increased future demand for its GPUs from U.S. tech firms," they said. For my part, I asked ChatGPT if DeepSeek would replace it and it said - in a very company-man sort of way - that the way ChatGPT interacts sets it apart.
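To make Kostohryz's multiple-contraction call concrete: since price equals earnings per share times the P/E multiple, a 30% contraction in the multiple maps one-for-one onto price if earnings stay flat, while earnings growth offsets it multiplicatively. A quick sketch with entirely made-up numbers (the EPS, starting multiple, and growth rate are illustrative assumptions, not figures from any analyst):

```python
# What a 30% P/E contraction implies for prices (illustrative numbers only):
# price = EPS * P/E, so with earnings flat a 30% lower multiple means a 30%
# lower price; earnings growth offsets the contraction multiplicatively.

eps = 4.00                      # assumed earnings per share
pe = 35.0                       # assumed starting multiple
price = eps * pe                # 140.0

contracted_pe = pe * (1 - 0.30)                    # multiple contracts 30%
flat_earnings_price = eps * contracted_pe          # earnings flat: price -30%
grown_earnings_price = eps * 1.25 * contracted_pe  # 25% EPS growth cushions it

print(round(flat_earnings_price, 2), round(grown_earnings_price, 2))  # 98.0 122.5
```

This is why the bears pair the multiple call with skepticism on earnings: the contraction only hits prices in full if the AI capex cycle fails to deliver profit growth alongside it.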
[10]
Some Analysts See Monday's Selloff as a 'Golden' Buying Opportunity. Here's What They Like
Other analysts said they view Broadcom, Applied Materials, Marvell Technology, Palantir, AMD, and Micron Technology favorably as well. Monday's DeepSeek-driven selloff could be an opportunity to pick up beaten-down AI stocks, according to some Wall Street analysts. After an app from Chinese AI startup DeepSeek knocked one from OpenAI off the top of Apple's list of most-downloaded free U.S. apps, Nvidia (NVDA) and a wide range of other AI-related stocks tumbled, dragging the tech-heavy Nasdaq Composite down over 3%. Underpinning the declines: the concern that the performance of DeepSeek's model rivals that of its U.S. counterparts while requiring less computing power, raising questions about the market for, and necessity of, expensive AI technology. Some Wall Street watchers saw an opportunity to buy the dip. Wedbush analysts called it a "golden buying opportunity" to own Nvidia, along with Microsoft (MSFT), Alphabet (GOOGL), Palantir (PLTR), and other heavyweights of the American AI ecosystem. The analysts called DeepSeek's accomplishments "impressive," but suggested major U.S. companies are unlikely to use a Chinese startup like DeepSeek to launch their AI infrastructure, adding that "launching a competitive LLM model for consumer use cases is one thing... launching broader AI infrastructure is a whole other ballgame and nothing with DeepSeek makes us believe anything different." Bernstein analysts called the market reaction "overblown," and said they continue to view AI stocks in their coverage favorably, particularly Nvidia and Broadcom (AVGO). Bernstein maintained "outperform" ratings for both stocks, as well as Qualcomm (QCOM) and Applied Materials (AMAT). Citi analysts, who maintained a "buy" rating for Nvidia, said recent AI spending measures, including President Trump's announcement of a $500 billion AI infrastructure project called Stargate, point to the continued need for advanced U.S. chips.
[12]
DeepSeek threat leading to overblown panic, buy the dips: Brokerages
DeepSeek: Global analysts Bernstein and UBS downplay panic over China's DeepSeek AI model disrupting the AI industry. They emphasize broader growth, innovation needs, and continued demand for AI infrastructure driven by global capex commitments and technological advancements. With the ongoing panic around China's low-cost artificial intelligence (AI) model DeepSeek, analysts from global brokerage firms Bernstein and UBS weighed in with their thoughts on the matter. While Bernstein advises investors to 'not buy the doomsday scenario', UBS states that this may 'not necessarily be a zero-sum game'. US tech shares took a hit, especially AI stocks, some of which plunged as much as 20%, amid investor concerns that Chinese artificial intelligence startup DeepSeek may disrupt the AI growth story that has powered the US stock rally over the past two years. But is there a cause for panic? Is the new Chinese AI model going to dominate the AI industry? Analysts at UBS and Bernstein certainly disagree. Here's what they have to say: Bernstein argues the panic around DeepSeek stems primarily from a combination of factors: 1) a fundamental misunderstanding over the "$5M" number, 2) DeepSeek's deployment of smaller models "distilled" from the larger R1, and 3) DeepSeek's actual pricing to use the models, which is admittedly far below what OpenAI is asking. "Our own initial reaction does not include panic (far from it). If we acknowledge that DeepSeek may have reduced costs of achieving equivalent model performance by, say, 10x, we also note that current model cost trajectories are increasing by about that much every year anyway (the infamous "scaling laws...") which can't continue forever," said the global brokerage firm in its note. Bernstein clarifies that the widely cited $5 million cost for DeepSeek's AI model development seems misleading.
The firm explains that DeepSeek comprises two model families: V3, an MoE model requiring fewer compute resources for comparable performance; and R1, a reinforcement learning-enhanced version of V3 achieving strong reasoning capabilities. While the $5 million figure reflects a hypothetical GPU rental cost for V3's training (using 2048 NVIDIA H800 GPUs for 2 months), Bernstein argues it omits substantial prior research and development expenses, as well as the undisclosed (but likely significant) resources invested in developing the R1 model. The brokerage also stated it believes that we NEED innovations like this (MoE, distillation, mixed precision etc) if AI is to continue progressing. Additionally, it is unlikely DeepSeek's innovations are entirely novel, given the numerous top AI researchers at other global labs (though their specific methods remain undisclosed). Further, the analysts said that the recent DeepSeek news coincides with several significant developments: Meta's substantial capex increase, the Stargate announcement, and China's massive $140B AI spending plan. These events reinforce the continued, and growing, demand for AI chips, reaffirming that DeepSeek is not the end of the AI world. Analysts at UBS believe that AI is here to stay and, if anything, the potential success of DeepSeek only reinforces the belief that it does not derail the AI growth story. "The overall market can grow, with potentially lower costs accelerating AI adoption across industries and further improving productivity gains," UBS analysts believe, stating that even if DeepSeek's model will be the way to go for the broader AI industry, it is not necessarily a zero-sum game.
Drawing a parallel with the mobile phone industry, UBS suggests that while more cost-effective smartphones led to broader global adoption, established leaders maintained their dominance in the high-end segment. Similarly, the established large language models (LLMs) may cater to the premium knowledge worker segment (10-20%) while DeepSeek's open-source model could drive wider adoption. UBS, too, cited Meta's substantial capex increase in AI infrastructure, stating that the capex commitments from leading US tech companies remain intact. "In addition, big tech's AI spending includes development of other models that involve the generation of audio and video, which appears to be out of scope for DeepSeek's current application. We believe sufficient capex is required to continue to produce innovative AI models as the technology advances," the UBS report added. As technology and competition evolve, it only makes sense that investors shift focus between the enabling, intelligence, and application layers. While value may eventually migrate from infrastructure developers (enabling layer) towards users (application layer), UBS cautions against premature conclusions. Lower costs could boost demand for the enabling layer through wider adoption. Q4 earnings season should provide clarity on DeepSeek, big tech capex, and AI monetization. Further, US consumer sentiment is declining due to unemployment and inflation concerns, exacerbated by tariff uncertainty. However, a resilient labor market (as seen in December's employment data) and the potentially limited impact of targeted tariffs on overall inflation offer a counterpoint. These volatile times make UBS believe that underallocated investors can consider structured strategies to buy equities on dips or use pockets of volatility as a means of generating diverse sources of income.
DeepSeek's emergence reinforces the staying power of AI, but also highlights the risks of concentrated or passive investment strategies. An active, diversified approach is recommended for AI exposure. However, upcoming big tech earnings will provide more insights into value creation trends. (Disclaimer: Recommendations, suggestions, views and opinions given by the experts are their own. These do not represent the views of The Economic Times)
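Bernstein's reading of the "$5 million" figure above is easy to reproduce as back-of-the-envelope arithmetic. The GPU count and duration (2048 H800s for about 2 months) come from the article; the rental rate of roughly $1.9 per H800 GPU-hour is an illustrative assumption, not a disclosed number:

```python
# Back-of-the-envelope check on the widely cited "$5M-$5.6M" DeepSeek V3
# training figure: 2048 NVIDIA H800 GPUs running for roughly 2 months,
# priced at an ASSUMED market rental rate (~$1.9/GPU-hour, illustrative).

gpus = 2048
hours = 2 * 30 * 24            # ~2 months of continuous training
gpu_hours = gpus * hours       # ~2.95 million GPU-hours
rate_usd = 1.9                 # assumed H800 rental $/GPU-hour

cost = gpu_hours * rate_usd
print(f"{gpu_hours / 1e6:.2f}M GPU-hours -> ~${cost / 1e6:.1f}M")
```

The point of the exercise is Bernstein's: this prices only the final training run at rental rates, and says nothing about the R&D, prior experiments, or the R1 work layered on top.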
[13]
Why DeepSeek AI could be bullish for Nvidia, Broadcom, Microsoft and other AI stocks
Nvidia stock and AI stocks are selling off early Monday morning, but is it game over for the AI market? In today's video, I discuss the recent updates impacting Nvidia (NASDAQ: NVDA) and other AI stocks after the volatility created by DeepSeek AI. To learn more, check out the short video, consider subscribing, and click the special offer link below. *Stock prices used were the after-market prices of January 24, 2025. The video was published on January 27, 2025. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jose Najarro has positions in Alphabet, Meta Platforms, Microsoft, Nvidia, and Tesla. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, and Tesla. The Motley Fool recommends Broadcom and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool. The Motley Fool is a USA TODAY content partner offering financial news, analysis and commentary designed to help people take control of their financial lives. Its content is produced independently of USA TODAY.
[0]
Nvidia: Do You Have to Sell Shares Now With DeepSeek's Rise? | Investing.com UK
China's AI innovation DeepSeek is shaking Nvidia's (NASDAQ:NVDA) stock and Wall Street. At the same time, Donald Trump is calling on the US to close ranks on technology policy, which is once again bringing the geopolitical rivalry between the US and China into focus. Shortly after Donald Trump took office as US President, the Chinese chatbot DeepSeek triggered a massive sell-off in Nvidia shares, causing the company's market capitalisation to fall by around $600 billion. The Nasdaq index also lost several percentage points before the situation stabilised slightly. According to experts, DeepSeek, a Chinese AI solution, could be trained with significantly less computing power than previous models. This poses a potential threat to the market for Nvidia chips. Other technology and energy companies also came under pressure due to this development. Trump called the events a 'wake-up call' for American industry. While he viewed Chinese innovation as positive for the further development of AI technologies, he used the incident to underpin his plans for massive investments in US AI infrastructure. He announced $500 billion for the construction of new data centres to put the US at the forefront in this field. DeepSeek's success calls into question the US efforts to secure its own dominance in the global AI market without major international competition. The Chinese push underscores the tensions between the two nations, which have been strained for years by trade disputes and geopolitical conflicts, particularly in the Pacific region and over Taiwan. DeepSeek was founded in 2023 by Chinese hedge fund manager Liang Wenfeng and uses open-source models. However, the chatbot is subject to state censorship and hides sensitive topics, such as the 1989 Tiananmen Square massacre. Nevertheless, DeepSeek reached number one in the US App Store for the iPhone, which is a remarkable development given the US scepticism towards Chinese technologies such as TikTok. 
We should not make false comparisons. We are comparing OpenAI's ChatGPT with DeepSeek. Both are AI language models and not chipmakers. Just because ChatGPT runs on Nvidia infrastructure doesn't mean that DeepSeek has better or equivalent chipsets. The language models require software in addition to hardware. Perhaps DeepSeek's software is better than OpenAI's. In that respect, the point goes to DeepSeek. Whether and what long-term effects this will have on Nvidia, we cannot see at all at the moment. We are sceptical because we had already communicated the sale of the Nvidia share to our customers on 13 January 2025. This was structurally foreseeable, and we have been able to rely on such signals for years. Here is an excerpt of it: And as we can see, we assume a trend reversal within the purple box between $125.45 and $111.44. And so we can already provide an answer to the question above: No, we will definitely not be selling Nvidia shares on a large scale now. Since the beginning of the year, we have seen a great deal of uncertainty among investors. Many do not know what to do. Stocks have risen sharply in recent years. The fear of a crash is growing daily. Of course, we are not always right, but since 2016, we have achieved an above-average return of well over 20% every year. In this respect, we are certainly at least the best second opinion one could imagine. The dispute between the US and China over technological dominance has far-reaching consequences. This conflict has been played out through punitive tariffs and investments in strategic future technologies since 2018. While the US is trying to reduce its dependence on Chinese products, China is further expanding its own technological independence. Trump responded to the latest developments by announcing tariffs on imported computer chips, semiconductors and pharmaceutical products in an effort to shift production of these goods back to the US. DeepSeek has overtaken ChatGPT in terms of downloads in a very short time.
OpenAI CEO Sam Altman welcomed the new competition and described the model as impressive. Nevertheless, the development shows that China's strategy in the field of artificial intelligence could be successful. According to think tanks, the competition between the U.S. and China will significantly influence the geopolitical order of the coming decades. However, both countries also face internal challenges: the U.S. is struggling with political instability, while China is confronted with an aging population. Disclaimer/Risk warning: The information provided here is for informational purposes only and does not constitute a recommendation to buy or sell. It should not be understood as an explicit or implicit assurance of a particular price development of the financial instruments mentioned or as a call to action. The purchase of securities involves risks that may lead to the total loss of the capital invested. The information provided does not replace expert investment advice tailored to individual needs. No liability or guarantee is assumed, either explicitly or implicitly, for the timeliness, accuracy, appropriateness or completeness of the information provided, nor for any financial losses. These are expressly not financial analyses, but journalistic texts. Readers who make investment decisions or carry out transactions based on the information provided here do so entirely at their own risk. The authors may hold securities of the companies/securities/shares discussed at the time of publication and therefore a conflict of interest may exist.
[11]
Has DeepSeek Ended The AI Run Led By NVIDIA? A Deep Dive Into What's Next For The Magnificent Seven
Well, it's not a great day for AI investors, and NVIDIA in particular, since the Chinese firm DeepSeek has managed to disrupt industry norms with its newest R1 AI model, which is said to change the concept of model training and the resources involved behind it. If you have been living under a rock or still haven't understood why the "AI markets" are panicking right now, this post is definitely for you. So, China has managed to launch an AI model that is said to be trained using significantly lower financial resources, which we'll talk about later, and this has stirred debate over whether the "AI supercycle" witnessed in the past year is overhyped, or rather not worth the money poured into it. DeepSeek R1 has managed to compete with some of the top-end LLMs out there, with an "alleged" training cost that might seem shocking. Let's start with what DeepSeek R1 is, and how it differs from the others. R1 is a one-of-a-kind open-source LLM that is said to rely primarily on an implementation that hasn't been done by any other alternative out there. We won't go much into technicals since that would make the post boring, but the important point to note here is that R1 relies on a "Chain of Thought" process: when a prompt is given to the AI model, it demonstrates the steps and conclusions it made to reach the final answer, so users can diagnose the step where the LLM made a mistake in the first place. Another interesting fact about DeepSeek R1 is the use of "Reinforcement Learning" to achieve an outcome. It is a type of machine learning where the model interacts with the environment and makes its decisions through a "reward-based process." When a desirable outcome is reached, the model learns to favor the choices where the reward is maximum, and in this way, it becomes more certain that the desirable conclusion will be achieved.
With OpenAI's o1, by contrast, the core focus is on supervised learning methods, which involve training the model on massive datasets of text and code, and that ultimately requires more financial resources. Speaking of financial resources, there's a lot of misconception in the markets around DeepSeek's training costs, since the rumored "$5.6 million" figure covers only the final training run, not the total cost. Since China is restricted from accessing cutting-edge AI computing hardware, it would not be wise for DeepSeek to reveal its AI arsenal, which is why the expert perception is that DeepSeek has computing power equivalent to its competitors, but undisclosed for now. Compared to OpenAI's GPT-o1, R1 manages to be around five times cheaper for input and output tokens, which is why the market is taking this development with uncertainty and surprise. But there's a pretty interesting angle to it, which we'll talk about next, and it's a reason people shouldn't panic over DeepSeek's accomplishment. NVIDIA has generated gigantic revenue over the past few quarters by selling AI compute resources, and mainstream companies in the Magnificent 7, including OpenAI, have access to superior technology compared to DeepSeek. Given that DeepSeek has managed to train R1 with constrained computing, imagine what these companies can bring to market with potent computing power, which makes the situation much more optimistic for the future of the AI markets. There's no competition to NVIDIA's CUDA and the surrounding ecosystem, and it's safe to say that in a world where AI is an emerging, growing technology, we are just at the start. DeepSeek's implementation doesn't mark the end of the AI hype. Rather, it shows us the untapped potential of the technology, although the markets aren't taking the development with much optimism, as Team Green has shed $300 billion+ from its market cap after DeepSeek's R1.
But we expect the dust to settle once people realize the positive side of the situation. Moreover, this will prompt companies like Meta, Google and Amazon to speed up their respective AI solutions, and as a Cantor Fitzgerald analyst says, DeepSeek's achievement should make us more bullish on NVIDIA and the future of AI.
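The "reward-based process" described above can be illustrated with a deliberately tiny sketch. This is not DeepSeek's actual training code (R1 uses large-scale policy optimization over a real language model); it is a toy replicator-style update with made-up candidates and rewards, showing the core idea: repeatedly reinforcing answers whose reward beats the current average concentrates the policy on the best one.

```python
# Toy illustration of reward-driven learning (NOT DeepSeek's actual method):
# the "policy" is a probability per candidate answer, and each step scales
# each candidate's probability by how much its reward beats the current
# average reward, then renormalizes -- so above-average answers are
# reinforced and below-average answers fade.

candidates = {"wrong": 0.0, "partial": 0.5, "correct": 1.0}  # answer -> reward
policy = {c: 1 / 3 for c in candidates}                      # start uniform
lr = 0.5

for _ in range(20):
    baseline = sum(policy[c] * r for c, r in candidates.items())  # avg reward
    for c, r in candidates.items():
        policy[c] *= 1 + lr * (r - baseline)  # reinforce above-average answers
    total = sum(policy.values())
    policy = {c: w / total for c, w in policy.items()}            # renormalize

best = max(policy, key=policy.get)
print(best)  # the highest-reward answer dominates: "correct"
```

The same dynamic, applied to reasoning traces scored by verifiable rewards (did the math check out, did the code run), is what lets a model improve without the massive supervised datasets the article contrasts it against.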
[14]
Nvidia Stock Slides Overnight On Robinhood As China's DeepSeek Sparks Doubts Over GPU Spending Frenzy -- Chamath Palihapitiya Highlights Report Indicating AI Holy Grail Cracked - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Nvidia Corporation NVDA shares fell notably in overnight Robinhood trading on Sunday as China's DeepSeek unveiled an AI model that reportedly offers cutting-edge performance at a fraction of the cost. This development has sparked doubts about the need for massive GPU investments. What Happened: DeepSeek, backed by Chinese quant firm High-Flyer, launched its R1 language model (LLM) as open source last week. It also published a technical report asserting that advanced AI systems can be developed on substantially smaller budgets. What stands out is DeepSeek's ability to achieve competitive results with less funding and technology compared to industry leaders like OpenAI, Meta Platforms, Inc. META, Microsoft Corporation MSFT, and Alphabet Inc. GOOG GOOGL. DeepSeek reportedly used roughly 50,000 Nvidia H100 GPUs. However, the extent of this hardware usage remains unclear. Over the weekend, DeepSeek's R1 model became a sensation on social media with several tech experts reacting to the company and its model. Here are some reactions: Billionaire investor Chamath Palihapitiya took to X, formerly Twitter, and highlighted a report written by Jeffrey Emanuel, the founder of a YouTube transcript optimizer. He underscored a portion of the report that detailed how DeepSeek essentially cracked one of the holy grails of AI: step-by-step reasoning without massive datasets. 28-year-old Alexandr Wang, who is the CEO of Scale AI and has made history as the youngest self-made billionaire in the world, also took to X and said, "DeepSeek is a wake-up call for America." Wang stated that the U.S. must focus on out-innovating and accelerating progress while strengthening chip export controls to maintain its lead.
AI engineer Shubham Sahoo highlighted that DeepSeek R1 is open-source and 96.4% more affordable than OpenAI's o1 model, offering comparable performance. Perplexity CEO Aravind Srinivas congratulated DeepSeek on reaching #1 on the App Store. "For a while, it wasn't clear who would beat ChatGPT for the first time. The best we could manage was #8, a year ago," he posted on X. Jasmine Sun, former Substack product manager, commented on DeepSeek's ability to outperform Western counterparts in writing tasks. Price Action: On Friday, Nvidia shares dropped 3.12% to close at $142.62, with an additional decline of 0.42% in after-hours trading, according to Benzinga Pro data. Despite the decline, NVDA has gained 3.12% year-to-date and an impressive 133.69% over the past 12 months. Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.
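Sahoo's "96.4% more affordable" figure can be reproduced from the two providers' per-million-token API list prices. The prices below are assumptions based on figures widely reported in January 2025, not numbers stated in this article:

```python
# Reproducing the "~96% cheaper" claim from per-million-token API list
# prices (ASSUMED figures, as widely reported in January 2025):
#   OpenAI o1:     $15.00 input / $60.00 output per 1M tokens
#   DeepSeek R1:    $0.55 input /  $2.19 output per 1M tokens

o1_output = 60.00
r1_output = 2.19

discount = 1 - r1_output / o1_output   # fraction saved on output tokens
print(f"R1 output tokens cost {discount:.1%} less than o1's")  # ~96% cheaper
```

Note that "96.4% cheaper" and "five times cheaper" (quoted elsewhere in this roundup) cannot both describe the same prices; the spread in the claims is itself a sign of how loosely the cost comparisons were being made that week.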
[15]
Cantor Fitzgerald On NVIDIA: The DeepSeek "Announcement Is Actually Very Bullish With AGI Seemingly Closer To Reality And Jevons Paradox Almost Certainly Leading To The AI Industry Wanting More Compute, Not Less"
This is not investment advice. The author has no position in any of the stocks mentioned. Wccftech.com has a disclosure and ethics policy. With NVIDIA shedding nearly half a trillion dollars in market capitalization today as analysts and investors alike begin to question the demand paradigm for hyperscalers, especially given the phenomenal efficiency gains demonstrated by DeepSeek's R1 AI model, Wall Street analysts are coming out in droves with broadly reassuring takes on the GPU giant's prospects. As we've noted in a previous post, China's DeepSeek recently shocked the global tech industry by training its R1 model at a cost of just around $6 million, which is roughly 1/50th the cost of comparable LLMs from the US and the EU. The model, in some respects, is quite superior to OpenAI's latest cutting-edge offering, the o1 model. What's more, the R1's operating costs are just 3 percent of what OpenAI typically charges for compute-intensive outputs. DeepSeek was able to achieve these phenomenal efficiency gains by implementing a few novel ideas: This brings us to the crux of the matter. At first glance, DeepSeek's R1 model appears to be a dark cloud for NVIDIA, questioning the need for hundreds of thousands of top-notch GPUs when an effective model can be trained with just 2000 H800s, as was the case with R1. However, a Cantor Fitzgerald analyst disagrees. Cantor Fitzgerald's investment note concedes at the outset that DeepSeek's R1 model has generated "great angst as to the impact for compute demand, and therefore, fears of peak spending on GPUs." However, the investment bank thinks this view is "farthest from the truth." We think this view is farthest from the truth and that the announcement is actually very bullish with AGI seemingly closer to reality and Jevons Paradox almost certainly leading to the AI industry wanting more compute, not less. Accordingly, the investment bank declares that it would be "buyers of NVIDIA shares on any potential weakness."
For the benefit of those who might not be aware, Jevons paradox posits that increased efficiency in utilizing a resource can lead to a greater consumption of that resource. Cantor Fitzgerald has applied the same reasoning towards DeepSeek's R1 model and the democratization of the AI moat. Interestingly, analysts at Citi and Bernstein have adopted a similar bullish view on NVIDIA in light of DeepSeek's advances. However, Raymond James analysts believe the development bodes ill for "large GPU clusters."
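For readers who want the Jevons argument in numbers rather than words, here is a minimal sketch. The 10x efficiency gain echoes figures cited elsewhere in this roundup; the demand elasticity of 1.5 is a pure illustration, since the whole bull/bear debate is really about what that elasticity turns out to be:

```python
# Jevons paradox in one calculation (illustrative numbers only): if training
# gets 10x more efficient, the cost per unit of AI "output" falls 90%; with
# sufficiently elastic demand, total compute consumed goes UP, not down.

efficiency_gain = 10.0          # assumed: same model quality at 1/10th compute
old_cost_per_unit = 1.0
new_cost_per_unit = old_cost_per_unit / efficiency_gain

elasticity = 1.5                # ASSUMED price elasticity of demand (>1 = elastic)
demand_multiplier = (old_cost_per_unit / new_cost_per_unit) ** elasticity

old_compute = 100.0             # baseline GPU demand, arbitrary units
new_compute = old_compute * demand_multiplier / efficiency_gain

print(new_compute > old_compute)  # True: cheaper AI -> more total GPU demand
```

With elasticity below 1 the same arithmetic flips and total compute demand falls, which is exactly the scenario the bears (such as the Raymond James analysts above) are pricing in.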
[16]
Here's what the sellside is saying about DeepSeek
For anyone wanting to train an LLM on analyst responses to DeepSeek, the Temu of ChatGPTs, this post is a one-stop shop. We've grabbed all relevant sellside emails in our inbox and copy-pasted them with minimal intervention. DeepSeek's a two-year-old, Hangzhou-based spinout of a Zhejiang University company that used machine learning to trade equities. Its stated goal is to make an artificial general intelligence for the fun of it, not for the money. There's a good interview on ChinaTalk with founder Liang Wenfeng. Here's Mizuho's Jordan Rochester to take up the story . . . [O]n Jan 20, [DeepSeek] released an open source model (DeepSeek-R1) that beats the industry's leading models on some math and reasoning benchmarks including capability, cost, openness etc. The DeepSeek app has topped the free app download rankings in Apple's app stores in China and the United States, surpassing ChatGPT in the U.S. download list. What really stood out? DeepSeek said it took 2 months and less than $6m to develop the model - building on already existing technology and leveraging existing models. In comparison, OpenAI is spending more than $5 billion a year. Apparently DeepSeek bought 10,000 NVIDIA chips whereas hyperscalers have bought many multiples of this figure. It fundamentally breaks the AI capex narrative if true. Sounds bad, but why? Here's Jefferies' Graham Hunt et al: With DeepSeek delivering performance comparable to GPT-4o for a fraction of the computing power, there are potential negative implications for the builders, as pressure on AI players to justify ever increasing capex plans could ultimately lead to a lower trajectory for data center revenue and profit growth. The DeepSeek R1 model is free to play with here, and does all the usual stuff like summarising research papers in iambic pentameter and getting logic problems wrong. The R1-Zero model, DeepSeek says, was trained entirely without supervised fine tuning.
Here's Damindu Jayaweera and team at Peel Hunt on the detail. Firstly, it was trained in under 3 million GPU hours, which equates to just over $5m in training cost. For context, analysts estimate Meta's last major AI model cost $60-70m to train. Secondly, we have seen people running the full DeepSeek model on commodity Mac hardware in a usable manner, confirming its inferencing efficiency (using, as opposed to training). We believe it will not be long before we see Raspberry Pi units running cut-down versions of DeepSeek. This efficiency translates into hosted versions of this model costing just 5% of the equivalent OpenAI price. Lastly, it is being released under the MIT License, a permissive software license that allows near-unlimited freedoms, including modifying it for proprietary commercial use. DeepSeek's not an unanticipated threat to the OpenAI Industrial Complex. Even The Economist had spotted it months ago, and industry mags like SemiAnalysis have been talking for ages about AI's commoditisation risk. Here's Joshua Meyers, a specialist salesperson at JPMorgan: It's unclear to what extent DeepSeek is leveraging High-Flyer's ~50k hopper GPUs (similar in size to the cluster on which OpenAI is believed to be training GPT-5), but what seems likely is that they're dramatically reducing costs (inference costs for their V2 model, for example, are claimed to be 1/7 that of GPT-4 Turbo). Their subversive (though not new) claim - that started to hit the US AI names this week - is that "more investments do not equal more innovation." Liang: "Right now I don't see any new approaches, but big firms do not have a clear upper hand. Big firms have existing customers, but their cash-flow businesses are also their burden, and this makes them vulnerable to disruption at any time." And when asked about the fact that GPT-5 has still not been released: "OpenAI is not a god, they won't necessarily always be at the forefront." Oof. Back to Mizuho: Why does this come at a painful moment?
This is happening after we just saw a Texas Hold'em 'all in' push of the chips with the Stargate announcement (~$500B by 2028E), Meta officially taking capex up to the range of $60-65B to scale up Llama, and of course MSFT's $80B announcement. The markets were literally trying to model just Stargate's stated demand for ~2mn units from NVDA when their total production is only 6mn (Nvidia's European trading is down 9% this morning, SoftBank was down 7%). Markets are now wondering whether this is an AI-bubble-popping moment (i.e. a dot-com bubble for Cisco). Nvidia is the largest individual company weight in the S&P 500 at 7%.

And Jefferies again:

1) We see at least two potential industry strategies. The emergence of more efficient training models out of China, which has been driven to innovate due to chip supply constraints, is likely to further intensify the race for AI dominance between the US and China. The key question for the data center builders is whether it continues to be a "build at all costs" strategy with accelerated model improvements, or whether focus now shifts towards higher capital efficiency, putting pressure on power demand and capex budgets from the major AI players. Near term the market will assume the latter.

2) Derating risk near term, earnings less impacted. Although data-center-exposed names are vulnerable to derating on sentiment, there is no immediate impact on earnings for our coverage. Any changes to capex plans apply with a lag given duration (>12M) and exposure in order books (~10% for HOT). We see limited risk of alterations or cancellations to existing orders, and expect at this stage a shift in expectations towards higher ROI on existing investments driven by more efficient models. Overall, we remain bullish on the sector, where scale leaders benefit from a widening moat and higher pricing power.

Though it's the Chinese, so people are suspicious.
Here's Citi's Atif Malik:

While DeepSeek's achievement could be groundbreaking, we question the notion that its feats were done without the use of advanced GPUs to fine-tune it and/or build the underlying LLMs the final model is based on through the distillation technique. While the dominance of US companies in the most advanced AI models could potentially be challenged, we estimate that in an inevitably more restrictive environment, US access to more advanced chips is an advantage. Thus, we don't expect leading AI companies would move away from more advanced GPUs, which provide more attractive $/TFLOPs at scale. We see the recent AI capex announcements like Stargate as a nod to the need for advanced chips.

And Meyers at JPMorgan again: above all, much is made of DeepSeek's research papers, and of their models' efficiency.

Spooky stuff for the Mag7, of course, but is that a good reason for a wider market selloff? Cheap Chinese AI means more productivity benefits, lower build costs and an acceleration towards the Andreessen Theory of Cornucopia, so maybe . . . good news in the long run? JPMorgan's Meyers again:

This strikes me as not being about the end of scaling, or about there not being a need for more compute, or that the one who puts in the most capital won't still win (remember, the other big thing that happened yesterday was that Mark Zuckerberg boosted AI capex materially). Rather, it seems to be about export bans forcing competitors across the Pacific to drive efficiency: "DeepSeek V2 was able to achieve incredible training efficiency with better model performance than other open models at 1/5th the compute of Meta's Llama 3 70B.
For those keeping track, DeepSeek V2 training required 1/20th the flops of GPT-4 while not being so far off in performance." If DeepSeek can reduce the cost of inference, then others will have to as well, and demand will hopefully more than make up for that over time.

That's also the view of semis analyst Tetsuya Wadaki at Morgan Stanley, the most AI-enthusiastic of the big banks:

We have not confirmed the veracity of these reports, but if they are accurate, and advanced LLMs are indeed able to be developed for a fraction of previous investment, we could see generative AI eventually run on smaller and smaller computers (downsizing from supercomputers to workstations, office computers, and finally personal computers), and the SPE [semiconductor production equipment] industry could benefit from the accompanying increase in demand for related products (chips and SPE) as demand for generative AI spreads.

And Peel Hunt again:

We believe the impact of those advantages will be twofold. In the medium to longer term, we expect LLM infrastructure to go the way of telco infrastructure and become a 'commodity technology'. The financial impact on those deploying AI capex today depends on regulatory interference - which had a major impact on telcos. If we think of AI as another 'tech infrastructure layer', like the internet, mobile, and the cloud, in theory the beneficiaries should be companies that leverage that infrastructure. While we think of Amazon, Google, and Microsoft as cloud infrastructure, this emerged out of the need to support their existing business models: e-commerce, advertising and information-worker software. The LLM infrastructure is different in that, like the railroads and telco infrastructure, it is being built ahead of true product/market fit.

We'll keep adding to this post as the emails keep landing.
Opinion: Why DeepSeek AI Could be Bullish for Nvidia, Broadcom, Microsoft, and other AI Stocks
Chinese AI startup DeepSeek's cost-efficient AI model sparks debate on Nvidia's future in the AI chip market, causing stock volatility and raising questions about AI infrastructure investments.
Chinese AI startup DeepSeek has sent shockwaves through the tech industry with its latest AI model, R1, potentially threatening Nvidia's stronghold in the AI chip market. This development has sparked a debate about the future of AI infrastructure investments and the efficiency of current AI development approaches [1][2].
DeepSeek, founded in 2023 by hedge fund manager Liang Wenfeng, has developed an AI model that reportedly matches or exceeds the performance of leading models like OpenAI's, but at a fraction of the cost. The company claims to have trained its R1 model using just $5.6 million worth of GPU hours, significantly less than the hundreds of millions reportedly spent by major U.S. tech companies [3][4].
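As a back-of-envelope check, the quoted figure is just GPU hours multiplied by a rental rate. A minimal sketch, assuming a ~$2 per GPU-hour rate and the ~2.788M H800 GPU-hours DeepSeek itself reported for V3 training (both numbers are assumptions drawn from outside this article's reporting):

```python
# Back-of-envelope reconstruction of the ~$5.6M training-cost claim.
# ASSUMPTIONS (not from this article): ~2.788M H800 GPU-hours,
# rented at ~$2.00 per GPU-hour.
gpu_hours = 2_788_000
rate_usd_per_gpu_hour = 2.00

training_cost_usd = gpu_hours * rate_usd_per_gpu_hour
print(f"Estimated training cost: ${training_cost_usd / 1e6:.2f}M")
# prints "Estimated training cost: $5.58M"
```

Under these assumptions the arithmetic lands on roughly $5.6M, matching the reported figure; note this counts only the final training run's rental cost, not hardware purchases, staff, or prior experiments.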
The news of DeepSeek's breakthrough has had immediate repercussions: Nvidia shares fell more than 3% to $142.62 on Friday, its European trading opened down as much as 9%, and SoftBank fell 7%.
DeepSeek's efficiency is attributed to several factors: training innovations born of chip supply constraints, reinforcement learning without supervised fine-tuning (in the R1-Zero variant), and building on existing models via distillation.
While some analysts view DeepSeek's achievement as a potential threat to Nvidia's business model, others remain skeptical: Citi's Atif Malik, for one, questions whether the model could really have been built and fine-tuned without advanced GPUs.
The success of DeepSeek has reignited discussions about the U.S.-China tech rivalry: U.S. export controls on advanced chips appear to have pushed Chinese developers toward greater training efficiency.
Despite the current market volatility, many analysts remain optimistic about Nvidia's long-term prospects: cheaper training and inference could expand overall demand for AI, spreading generative AI to ever smaller devices.
As the AI landscape continues to evolve, the impact of DeepSeek's breakthrough on Nvidia and the broader AI industry remains to be seen, with both challenges and opportunities on the horizon.