Curated by THEOUTPOST
On Sat, 1 Mar, 4:02 PM UTC
10 Sources
[1]
DeepSeek's 'Theoretical' Profit Margins Are Just That
The Chinese artificial intelligence startup that rocked global markets earlier this year with its low-cost and high-performance AI models has outlined a potential path to major profitability. The transparency is laudable -- even if the operating word is "potential." Over the weekend, DeepSeek shared an eye-popping "theoretical" cost-profit margin of 545%. The revelation came as the closing update in the company's weeklong show that gave the world an exceedingly rare look under the hood of an AI firm. On Saturday, it published a blogpost outlining its potential profit margins when looking at a 24-hour period of inferencing costs (essentially, the computing power and related real-time operating expenses) compared to user requests for its two latest models, V3 and R1.
[2]
DeepSeek reveals theoretical margin on its AI models is 545%
Chinese artificial intelligence phenomenon DeepSeek revealed some financial numbers on Saturday, saying its "theoretical" profit margin could be more than five times costs, peeling back a layer of the secrecy that shrouds business models in the AI industry. The 20-month-old startup that rattled Silicon Valley with its innovative and inexpensive approach to building AI models said on X that the ratio of its V3 and R1 models' inferencing costs to sales during a 24-hour period on the last day of February put profit margins at 545%. Inferencing refers to the computing power, electricity, data storage and other resources needed to make AI models work in real time. However, DeepSeek added a disclaimer in details it provided on GitHub, saying its actual revenues are substantially lower for various reasons, including the fact that only a small set of its services are monetised and it offers discounts during off-peak hours. Nor do the costs factor in all the R&D and training expenses for building its models. While the eye-popping profit margins are therefore hypothetical, the reveal comes at a time when the profitability of AI startups and their models is a hot topic among technology investors. Companies from OpenAI Inc. to Anthropic PBC are experimenting with various revenue models, from subscription-based to charging for usage to collecting licensing fees, as they race to build ever more sophisticated AI products. But investors are questioning these business models and their return on investment, opening a debate on the feasibility of reaching profitability any time soon.
The Hangzhou-based startup said Saturday on X that its online service had a "cost profit margin of 545%" and gave an overview of its operations, including how it optimised computing power by balancing load -- that is, managing traffic so that work is distributed evenly across multiple servers and data centers. DeepSeek said it innovated to optimise the amount of data processed by the AI model in a given time period, and managed latency -- the wait time between a user submitting a query and receiving the answer. In a series of unusual steps beginning early this week, the startup, which has espoused open-source AI, surprised many in the industry by sharing some key innovations and data underpinning its models, in contrast to the proprietary approach of its biggest US rivals like OpenAI.
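DeepSeek has not published the balancer described above, so the following is only a minimal sketch of the general idea: a least-loaded dispatch policy that sends each incoming request to whichever server currently has the fewest requests in flight. The server names and request counts are illustrative, not DeepSeek's.

```python
# Minimal sketch of least-loaded dispatch: each incoming request goes to
# the server with the fewest in-flight requests, keeping work spread evenly.

class LeastLoadedBalancer:
    def __init__(self, servers):
        # in-flight request count per server
        self.load = {s: 0 for s in servers}

    def dispatch(self):
        # pick the server with the fewest in-flight requests
        server = min(self.load, key=self.load.get)
        self.load[server] += 1
        return server

    def complete(self, server):
        # called when a request finishes on `server`
        self.load[server] -= 1

balancer = LeastLoadedBalancer(["node-a", "node-b", "node-c"])
assignments = [balancer.dispatch() for _ in range(6)]
# with no completions, six requests spread evenly: two per server
```

Real inference schedulers also weigh factors this sketch ignores, such as KV-cache locality and batch sizes, but the even-distribution goal is the same.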
[3]
DeepSeek claims 'theoretical' profit margins of 545% | TechCrunch
Chinese AI startup DeepSeek recently declared that its AI models could be very profitable -- with some asterisks. In a post on X, DeepSeek boasted that its online services have a "cost profit margin" of 545%. However, that margin is calculated based on "theoretical income." It discussed these numbers in more detail at the end of a longer GitHub post outlining its approach to achieving "higher throughput and lower latency." The company wrote that when it looks at usage of its V3 and R1 models during a 24-hour period, if that usage had all been billed using R1 pricing, DeepSeek would already have $562,027 in daily revenue. Meanwhile, the cost of leasing the necessary GPUs (graphics processing units) would have been just $87,072. The company admitted that its actual revenue is "substantially lower" for a variety of reasons, like nighttime discounts, lower pricing for V3, and the fact that "only a subset of services are monetized," with web and app access remaining free. Of course, if the app and website weren't free, and if other discounts weren't available, usage would presumably be much lower. So these calculations seem to be highly speculative -- more a gesture towards potential future profit margins than a real snapshot of DeepSeek's bottom line right now. But the company is sharing these numbers amidst broader debates about AI's cost and potential profitability. DeepSeek leapt into the spotlight in January, with a new model that supposedly matched OpenAI's o1 on certain benchmarks, despite being developed at a much lower cost, and in the face of U.S. trade restrictions that prevent Chinese companies from accessing the most powerful chips. Tech stocks tumbled and analysts raised questions about AI spending. DeepSeek's tech didn't just rattle Wall Street.
Its app briefly displaced OpenAI's ChatGPT at the top of Apple's App Store -- though it's subsequently fallen off the general rankings and is currently ranked #6 in productivity, behind ChatGPT, Grok, and Google Gemini.
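The 545% figure follows directly from the two numbers quoted above, treating the margin as profit over cost rather than revenue over cost; a quick check:

```python
# Reproduce DeepSeek's "cost profit margin" from the figures in its post:
# theoretical daily revenue (all usage billed at R1 pricing) versus the
# daily cost of leasing the GPUs.
theoretical_revenue = 562_027  # USD per day
gpu_cost = 87_072              # USD per day

profit = theoretical_revenue - gpu_cost
margin_pct = profit / gpu_cost * 100
print(round(margin_pct))  # 545
```

Note that dividing revenue (rather than profit) by cost would give roughly 645%, so the company's 545% is the profit-over-cost reading.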
[4]
China's DeepSeek claims theoretical cost-profit ratio of 545% per day
BEIJING (Reuters) - Chinese AI startup DeepSeek on Saturday disclosed some cost and revenue data related to its hit V3 and R1 models, claiming a theoretical cost-profit ratio of up to 545% per day, though it cautioned that actual revenue would be significantly lower. This marks the first time the Hangzhou-based company has revealed any information about its profit margins from less computationally intensive "inference" tasks, the stage after training that involves trained AI models making predictions or performing tasks, such as through chatbots. The revelation could further rattle AI stocks outside China that plunged in January after web and app chatbots powered by its R1 and V3 models surged in popularity worldwide. The sell-off was partly caused by DeepSeek's claims that it spent less than $6 million on chips used to train the model, much less than what U.S. rivals like OpenAI have spent. The chips DeepSeek claims it used, Nvidia's H800, are also much less powerful than what OpenAI and other U.S. AI firms have access to, making investors question even further U.S. AI firms' pledges to spend billions of dollars on cutting-edge chips. DeepSeek said in a GitHub post published on Saturday that assuming the cost of renting one H800 chip is $2 per hour, the total daily inference cost for its V3 and R1 models is $87,072. In contrast, the theoretical daily revenue generated by these models is $562,027, leading to a cost-profit ratio of 545%. In a year this would add up to just over $200 million in revenue. However, the firm added that its "actual revenue is substantially lower" because the cost of using its V3 model is lower than the R1 model, only some services are monetized as web and app access remain free, and developers pay less during off-peak hours. (Reporting by Eduardo Baptista; Editing by Daren Butler)
[5]
China's DeepSeek claims theoretical cost-profit ratio of 545% per day
Chinese AI startup DeepSeek on Saturday disclosed some cost and revenue data related to its hit V3 and R1 models, claiming a theoretical cost-profit ratio of up to 545% per day, though it cautioned that actual revenue would be significantly lower. This marks the first time the Hangzhou-based company has revealed any information about its profit margins from less computationally intensive "inference" tasks, the stage after training that involves trained AI models making predictions or performing tasks, such as through chatbots. The revelation could further rattle AI stocks outside China that plunged in January after web and app chatbots powered by its R1 and V3 models surged in popularity worldwide. The sell-off was partly caused by DeepSeek's claims that it spent less than $6 million on chips used to train the model, much less than what U.S. rivals like OpenAI have spent. The chips DeepSeek claims it used, Nvidia's H800, are also much less powerful than what OpenAI and other U.S. AI firms have access to, making investors question even further U.S. AI firms' pledges to spend billions of dollars on cutting-edge chips. DeepSeek said in a GitHub post published on Saturday that assuming the cost of renting one H800 chip is $2 per hour, the total daily inference cost for its V3 and R1 models is $87,072. In contrast, the theoretical daily revenue generated by these models is $562,027, leading to a cost-profit ratio of 545%. In a year this would add up to just over $200 million in revenue. However, the firm added that its "actual revenue is substantially lower" because the cost of using its V3 model is lower than the R1 model, only some services are monetized as web and app access remain free, and developers pay less during off-peak hours.
[6]
China's DeepSeek claims theoretical cost-profit ratio of 545% per day
BEIJING, March 1 (Reuters) - Chinese AI startup DeepSeek on Saturday disclosed some cost and revenue data related to its hit V3 and R1 models, claiming a theoretical cost-profit ratio of up to 545% per day, though it cautioned that actual revenue would be significantly lower. This marks the first time the Hangzhou-based company has revealed any information about its profit margins from less computationally intensive "inference" tasks, the stage after training that involves trained AI models making predictions or performing tasks, such as through chatbots. The revelation could further rattle AI stocks outside China that plunged in January after web and app chatbots powered by its R1 and V3 models surged in popularity worldwide. The sell-off was partly caused by DeepSeek's claims that it spent less than $6 million on chips used to train the model, much less than what U.S. rivals like OpenAI have spent. The chips DeepSeek claims it used, Nvidia's H800, are also much less powerful than what OpenAI and other U.S. AI firms have access to, making investors question even further U.S. AI firms' pledges to spend billions of dollars on cutting-edge chips. DeepSeek said in a GitHub post published on Saturday that assuming the cost of renting one H800 chip is $2 per hour, the total daily inference cost for its V3 and R1 models is $87,072. In contrast, the theoretical daily revenue generated by these models is $562,027, leading to a cost-profit ratio of 545%. In a year this would add up to just over $200 million in revenue. However, the firm added that its "actual revenue is substantially lower" because the cost of using its V3 model is lower than the R1 model, only some services are monetized as web and app access remain free, and developers pay less during off-peak hours. Reporting by Eduardo Baptista; Editing by Daren Butler
[7]
DeepSeek Reveals Theoretical Margin on Its AI Models Is 545%
Chinese artificial intelligence phenomenon DeepSeek revealed some financial numbers on Saturday, saying its "theoretical" profit margin could be more than five times costs, peeling back a layer of the secrecy that shrouds business models in the AI industry. The 20-month-old startup that rattled Silicon Valley with its innovative and inexpensive approach to building AI models said on X that the ratio of its V3 and R1 models' inferencing costs to sales during a 24-hour period on the last day of February put profit margins at 545%.
[8]
DeepSeek Reports 545% Daily Profit Despite Free AI Services
Chinese AI startup DeepSeek has reported a theoretical daily profit margin of 545% for its inference services, despite limitations in monetisation and discounted pricing structures. The company shared these details in a recent GitHub post, outlining the operational costs and revenue potential of its DeepSeek-V3 and R1 models. Based on DeepSeek-R1's pricing model -- charging $0.14 per million input tokens for cache hits, $0.55 per million for cache misses, and $2.19 per million output tokens -- the theoretical revenue generated daily is $562,027. However, the company acknowledged that actual earnings were significantly lower due to lower pricing for DeepSeek-V3, free access to web and app services, and automatic nighttime discounts. "Our pricing strategy prioritises accessibility and long-term adoption over immediate revenue maximisation," DeepSeek said. According to the company, DeepSeek's inference services run on NVIDIA H800 GPUs, with matrix multiplications and dispatch transmissions using the FP8 format, while core MLA computations and combine transmissions operate in BF16. The company scales its GPU usage based on demand, deploying all nodes during peak hours and reducing them at night to allocate resources for research and training. The GitHub post revealed that over a 24-hour period from 12:00 PM on February 27, 2025, to 12:00 PM on February 28, 2025, DeepSeek recorded peak node occupancy at 278, with an average of 226.75 nodes in operation. With each node containing eight H800 GPUs and an estimated leasing cost of $2 per GPU per hour, the total daily expenditure reached $87,072. The above revelation could affect the US stock market. The launch of DeepSeek's latest model, R1, which the company claims was trained on a $6 million budget, triggered a sharp market reaction. NVIDIA's stock tumbled 17%, wiping out nearly $600 billion in value, driven by concerns over the model's efficiency.
However, NVIDIA chief Jensen Huang, during the recent earnings call, said the company's inference demand is accelerating, fuelled by test-time scaling and new reasoning models. "Models like OpenAI's, Grok 3, and DeepSeek R1 are reasoning models that apply inference-time scaling. Reasoning models can consume 100 times more compute," he said. "DeepSeek-R1 has ignited global enthusiasm. It's an excellent innovation. But even more importantly, it has open-sourced a world-class reasoning AI model," Huang said. According to a recent report, DeepSeek plans to release its next reasoning model, the DeepSeek R2, 'as early as possible.' The company initially planned to release it in early May but is now considering an earlier timeline. The model is said to produce 'better coding' and reason in languages beyond English.
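The $87,072 daily cost figure reported above can be reproduced directly from the occupancy numbers in the GitHub post:

```python
# Daily leasing cost implied by the reported occupancy: an average of
# 226.75 nodes, each with eight H800 GPUs, at $2 per GPU-hour, over 24 hours.
avg_nodes = 226.75
gpus_per_node = 8
price_per_gpu_hour = 2.0  # USD, DeepSeek's assumed H800 rental rate
hours = 24

daily_cost = avg_nodes * gpus_per_node * price_per_gpu_hour * hours
print(daily_cost)  # 87072.0
```

The fractional average (226.75 nodes) reflects that node counts vary through the day between the nighttime minimum and the 278-node peak.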
[9]
How DeepSeek Achieved a 500% Profit Margin in the AI Industry
DeepSeek has captured the attention of the AI industry with its extraordinary 500% profit margin, a feat that has set it apart as a leader in efficiency and innovation. This achievement stems from a combination of innovative technology, strategic resource management, and a forward-thinking business model. By optimizing their inference systems for the R1 and V3 models, DeepSeek has redefined operational efficiency. But what makes their approach so effective? The answer lies in their innovative use of GPU technology, dynamic load balancing, and a pricing strategy designed to maximize both profitability and accessibility. Through these methods, DeepSeek has not only maintained competitive pricing but also demonstrated how advanced system design can drive financial success. Their model offers a blueprint for balancing cost, scalability, and performance in the AI sector, making them a standout example of what's possible when innovation meets strategic execution. At the core of DeepSeek's operational success is its efficient use of H800 GPUs, which are optimized to handle both training and inference tasks. By employing 8-bit floating-point precision, the company reduces computational overhead while preserving model accuracy. This approach allows GPUs to operate at peak efficiency, even during periods of high demand, allowing faster processing and lower energy consumption. What truly sets DeepSeek apart is its 24/7 utilization strategy. During nighttime hours, when inference demand naturally declines, the company redirects its GPU resources toward research and training tasks. This dual-purpose use of computational power ensures that no resources are wasted, maximizing both productivity and cost efficiency. By using this continuous utilization model, DeepSeek has created a system that not only meets operational demands but also drives innovation and profitability. Dynamic load balancing is another cornerstone of DeepSeek's success.
By intelligently distributing workloads across 278 nodes during peak hours, the company ensures that every GPU is fully used. This strategy accelerates processing speeds while simultaneously reducing energy consumption, leading to significant cost savings. During off-peak hours, the system automatically adjusts to lower demand, reallocating resources as needed. This adaptability allows DeepSeek to maintain operational efficiency regardless of workload fluctuations. The result is a highly flexible infrastructure that minimizes waste and maximizes profitability. By combining advanced load balancing with efficient GPU utilization, DeepSeek has created a system that is both scalable and cost-effective. DeepSeek's adoption of 8-bit floating-point precision represents a significant innovation in computational efficiency. This method simplifies the complexity of inference tasks, allowing faster token processing without compromising performance. For a company that processes 600 billion input tokens and generates 168 billion output tokens daily, this level of precision is essential for maintaining scalability and efficiency. By reducing the computational burden, DeepSeek not only speeds up operations but also lowers energy consumption. This dual benefit directly contributes to the company's impressive profit margins. The use of 8-bit precision highlights DeepSeek's commitment to using advanced technology to achieve both operational and financial efficiency. DeepSeek has further solidified its position as an industry leader through its open-sourcing initiative, releasing five repositories to the public. This move fosters collaboration within the AI community while showcasing the company's technological expertise. By sharing its advancements, DeepSeek positions itself as a transparent and forward-thinking organization, attracting top talent and potential partnerships.
This strategy not only enhances the company's credibility but also drives innovation by encouraging external contributions. The open-sourcing initiative reflects DeepSeek's commitment to advancing the AI field as a whole, reinforcing its role as a leader in both technology and community engagement. DeepSeek's pricing model is as innovative as its technology. By offering discounts of 50% to 75% during off-peak hours, the company incentivizes usage when demand is lower. This approach ensures efficient GPU utilization around the clock while maximizing revenue. Interestingly, DeepSeek monetizes only its API usage, while access through its web and app platforms remains free. This selective monetization strategy reflects the company's confidence in generating substantial revenue from a focused segment, while also broadening its user base. By aligning its pricing model with its operational strengths, DeepSeek has created a system that is both profitable and accessible. Operating 278 nodes during peak hours comes with significant costs, estimated at $87,072 per day. However, DeepSeek's revenue model offsets these expenses effectively. With a theoretical daily revenue of $562,027, the company achieves a remarkable 545% profit margin. When accounting for GPU ownership and reduced operational costs, the actual profit margin is closer to 85%, a figure that still underscores the company's financial efficiency. This level of profitability highlights the effectiveness of DeepSeek's system design and resource management, proving that high margins are achievable even in a resource-intensive industry. DeepSeek's approach is reshaping the AI industry, particularly in the competitive landscape of foundation model APIs. By exposing inefficiencies in traditional pricing models, the company challenges competitors to rethink their strategies. Its ability to scale operations while maintaining profitability sets a new benchmark for cost-effective AI systems.
DeepSeek's success signals a shift in the industry, where efficiency and innovation are becoming critical for staying competitive. For companies looking to thrive in this evolving market, DeepSeek's model offers valuable insights into balancing cost, scalability, and performance.
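The time-of-day pricing described above can be sketched as a simple rate function. The $2.19 peak rate is the R1 output-token price quoted earlier and the 75% figure matches the top end of the reported off-peak discounts; the midnight-to-8 a.m. window here is an assumption for illustration, not DeepSeek's actual schedule.

```python
# Sketch of time-of-day API pricing: a peak rate with an off-peak discount.
# Peak rate: $2.19 per million output tokens (R1 price quoted above).
# Off-peak window (00:00-08:00) is an assumed schedule for illustration.

def output_token_price(hour: int, peak_rate: float = 2.19,
                       discount: float = 0.75) -> float:
    """USD per million output tokens at the given hour (0-23)."""
    off_peak = 0 <= hour < 8  # assumed nighttime window
    return peak_rate * (1 - discount) if off_peak else peak_rate

print(output_token_price(14))  # 2.19 (daytime, full price)
print(output_token_price(3))   # 0.5475 (overnight, 75% off)
```

A discount structured this way shifts price-sensitive batch traffic into the hours when inference demand is naturally low, which is exactly when DeepSeek says it frees nodes for research and training.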
[10]
China's DeepSeek Reports 545% Theoretical Daily Profit Margin, Even Though ChatGPT-Maker OpenAI Is Yet To Turn A Profit - Microsoft (NASDAQ:MSFT), NVIDIA (NASDAQ:NVDA)
On Saturday, Chinese AI startup DeepSeek revealed cost and revenue estimates for its popular V3 and R1 models, showing a theoretical cost-profit ratio of up to 545% per day. This revelation further challenged the economics of U.S. AI companies like OpenAI that spend billions on cutting-edge chips. What Happened: In a GitHub post, DeepSeek detailed that its AI inference costs -- associated with running trained models -- amount to $87,072 per day, assuming a rental price of $2 per hour for each of Nvidia Corp.'s H800 chips, reported Reuters. In contrast, it estimates daily revenue of $562,027, equating to an annualized revenue of over $200 million. However, DeepSeek acknowledged that "actual revenue is substantially lower" due to factors such as free web and app access, lower fees for off-peak usage, and the fact that some services remain unmonetized. Why It's Important: The latest development follows earlier disclosures that DeepSeek spent under $6 million on chips to train its models -- far less than U.S. rivals. AI stocks outside China tumbled in January as investors reassessed the capital requirements for AI development. Nvidia's market value plunged by a record-breaking $593 billion in a single day -- the largest loss ever for a Wall Street company. DeepSeek's strategic pricing, including off-peak hours discounts last week, has also disrupted the AI industry. Meanwhile, OpenAI CEO Sam Altman has also revealed that his company is incurring losses on its ChatGPT Pro plan, priced at $200 per month. At the time, it was also reported that despite raising $20 billion since its inception, OpenAI has yet to turn a profit. Last year, it was reported that OpenAI, backed by Microsoft Corp., intends to increase the price of ChatGPT over the next five years.
Chinese AI startup DeepSeek has disclosed a theoretical 545% profit margin for its AI models, sparking discussions about AI profitability and challenging industry norms. However, the company cautions that actual revenues are substantially lower.
Chinese artificial intelligence startup DeepSeek has sent shockwaves through the global AI industry by revealing a staggering "theoretical" profit margin of 545% for its AI models. This disclosure, made over the weekend, marks a rare instance of transparency in the typically secretive world of AI financials [1].
DeepSeek's claim is based on a 24-hour period of inferencing costs compared to user requests for its latest models, V3 and R1. The company stated that its potential daily revenue could reach $562,027, while the cost of leasing the necessary GPUs would be just $87,072 [3]. This translates to an annual revenue potential of over $200 million [4].
However, DeepSeek has been quick to add disclaimers to these eye-popping figures. The company acknowledges that its actual revenues are "substantially lower" due to several factors: only a subset of its services is monetized (web and app access remain free), the V3 model is priced lower than R1, and developers pay discounted rates during off-peak hours.
This revelation comes at a crucial time when the profitability of AI startups and their models is a hot topic among technology investors. Companies like OpenAI and Anthropic are experimenting with various revenue models, from subscriptions to usage-based charging and licensing fees [2].
DeepSeek's disclosure has the potential to further rattle AI stocks outside China, which already experienced a significant drop in January when the company's chatbots surged in popularity worldwide [5].
Part of the industry's reaction stems from DeepSeek's claims of developing high-performance AI models at a fraction of the cost of its U.S. counterparts. The company stated it spent less than $6 million on chips to train its models, significantly less than what companies like OpenAI have reportedly invested [5].
DeepSeek attributes its efficiency to several technological innovations: load balancing that distributes traffic evenly across servers and data centers, optimizations for throughput and latency, FP8 (8-bit floating-point) precision for key computations, and dynamic GPU scaling that shifts idle nighttime capacity toward research and training.
While DeepSeek's numbers are largely theoretical, they have ignited a broader debate about AI costs and potential profitability. The company's approach challenges the notion that cutting-edge AI development requires massive investments in the most powerful chips [5].
As the AI industry continues to evolve rapidly, DeepSeek's revelations may prompt other companies to reassess their strategies and potentially increase transparency around their own financial models and technological approaches.