2 Sources
[1]
Secrets of DeepSeek AI model revealed in landmark paper
The success of DeepSeek's powerful artificial intelligence (AI) model R1 -- which made the US stock market plummet when it was released in January -- did not hinge on being trained on the output of its rivals, researchers at the Chinese firm have said. The statement came in documents released alongside a peer-reviewed version of the R1 model, published today in Nature.

R1 is designed to excel at 'reasoning' tasks such as mathematics and coding, and is a cheaper rival to tools developed by US technology firms. As an 'open weight' model, it is available for anyone to download and is the most popular such model on the AI community platform Hugging Face to date, having been downloaded 10.9 million times.

The paper updates a preprint released in January, which describes how DeepSeek augmented a standard large language model (LLM) to tackle reasoning tasks. Its supplementary material reveals for the first time how much R1 cost to train: the equivalent of just US$294,000. This comes on top of the roughly $6 million that the company, based in Hangzhou, spent to make the base LLM that R1 is built on, but the total is still substantially less than the tens of millions of dollars that rival models are thought to have cost. DeepSeek says R1 was trained mainly on Nvidia's H800 chips, which in 2023 were barred from sale to China under US export controls.

R1 is thought to be the first major LLM to undergo the peer-review process. "This is a very welcome precedent," says Lewis Tunstall, a machine-learning engineer at Hugging Face who reviewed the Nature paper. "If we don't have this norm of sharing a large part of this process publicly, it becomes very hard to evaluate whether these systems pose risks or not."

In response to peer-review comments, the DeepSeek team reduced anthropomorphizing in its descriptions and added clarifications of technical details, including the kinds of data the model was trained on, and its safety. "Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model," says Huan Sun, an AI researcher at Ohio State University in Columbus. "Other firms should do the same."

DeepSeek's major innovation was to use an automated kind of trial and error, an approach known as pure reinforcement learning, to create R1. The process rewarded the model for reaching correct answers, rather than teaching it to follow human-selected reasoning examples. The company says that this is how its model learnt its own reasoning-like strategies, such as how to verify its workings without following human-prescribed tactics. To boost efficiency, the model also scored its own attempts using estimates, rather than employing a separate algorithm to do so, a technique known as group relative policy optimization.

The model has been "quite influential" among AI researchers, says Sun. "Almost all work in 2025 so far that conducts reinforcement learning in LLMs might have been inspired by R1 one way or another."

Media reports in January suggested that researchers at OpenAI, the company based in San Francisco, California, that created ChatGPT and the 'o' series of reasoning models, thought DeepSeek had used outputs from OpenAI models to train R1, a method that could have accelerated the model's abilities while using fewer resources. DeepSeek has not published its training data as part of the paper. But, in exchanges with referees, the firm's researchers stated that R1 did not learn by copying reasoning examples generated by OpenAI models. However, they acknowledged that, like most other LLMs, R1's base model was trained on the web, so it will have ingested any AI-generated content already on the Internet.

This rebuttal is "as convincing as what we could see in any publication", says Sun. Tunstall adds that, although he can't be 100% sure R1 wasn't trained on OpenAI examples, replication attempts by other labs suggest that DeepSeek's recipe for reasoning is probably good enough not to have needed to. "I think the evidence now is fairly clear that you can get very high performance just using pure reinforcement learning," he says.

For researchers, R1 is still very competitive, Sun says. In a challenge to complete scientific tasks such as analyzing and visualizing data, known as ScienceAgentBench, Sun and colleagues found that although R1 was not first for accuracy, it was one of the best models in terms of balancing ability with cost.

Other researchers are now trying to apply the methods used to create R1 to improve the reasoning-like abilities of existing LLMs, as well as extending them to domains beyond mathematics and coding, says Tunstall. In that way, he adds, R1 has "kick-started a revolution".
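To make the group relative policy optimization step described above concrete, here is a minimal sketch -- our illustration, not DeepSeek's code. For one prompt, the policy samples a group of candidate answers, and each answer's advantage is its reward standardized against the group's own mean and spread, so no separately trained value model is needed. The group size and reward values are hypothetical.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage estimate (illustrative sketch).

    Each sampled answer in a group is scored relative to the group's
    own reward statistics: the group mean acts as the baseline and the
    group standard deviation as the scale, so no separate value network
    ("critic") is needed to judge the attempts.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    baseline = rewards.mean()        # the group supplies its own baseline
    scale = rewards.std() + eps      # eps avoids division by zero
    return (rewards - baseline) / scale

# Hypothetical group of 4 answers to one problem, rewarded 1.0 when the
# final answer was correct and 0.0 otherwise:
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
# roughly [ 1., -1., -1., 1.]: correct answers get positive advantages
```

In the full algorithm these advantages weight a clipped, PPO-style policy update; the sketch isolates only the group-relative baseline that lets the model score its own attempts without a separate scoring algorithm.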
[2]
Secrets of Chinese AI Model DeepSeek Revealed in Landmark Paper
The first peer-reviewed study of the DeepSeek AI model shows how a Chinese start-up firm made the market-shaking LLM for $300,000.
Chinese startup DeepSeek's AI model R1, known for its advanced reasoning capabilities, has been detailed in a peer-reviewed paper published in Nature. The study reveals the model's innovative training approach and surprisingly low development costs.
DeepSeek, a Chinese startup, has made waves in the artificial intelligence community with its powerful AI model R1. The company recently published a peer-reviewed paper in Nature, revealing the secrets behind its groundbreaking technology [1][2].
R1's success lies in its unique training methodology. DeepSeek employed an automated trial-and-error approach known as pure reinforcement learning, which rewarded the model for reaching correct answers rather than following human-selected reasoning examples [1]. This innovative technique allowed R1 to develop its own reasoning-like strategies, including self-verification methods, without relying on human-prescribed tactics.
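As a hedged sketch of what "rewarding the model for reaching correct answers" can mean in practice, the rule-based checker below compares only the final answer against a reference, leaving the model free to discover its own reasoning steps. The 'Answer: ...' output format is a hypothetical convention for this illustration, not DeepSeek's actual reward implementation.

```python
import re

def correctness_reward(model_output: str, reference_answer: str) -> float:
    """Rule-based reward: 1.0 if the final answer matches the reference,
    0.0 otherwise. Only the end result is checked, so no human-selected
    reasoning examples are required -- the chain of thought that led
    there is never graded directly.

    Assumes (hypothetically) that the model was prompted to finish with
    a line of the form 'Answer: <value>'.
    """
    match = re.search(r"Answer:\s*(.+?)\s*$", model_output.strip())
    if match is None:
        return 0.0  # unparseable output earns no reward
    return 1.0 if match.group(1) == reference_answer.strip() else 0.0

# Hypothetical sampled outputs for the same problem:
print(correctness_reward("6 * 7 = 42.\nAnswer: 42", "42"))   # 1.0
print(correctness_reward("Hmm, maybe 41?\nAnswer: 41", "42"))  # 0.0
```

Scaled across many automatically checkable mathematics and coding problems, rewards of this kind supply the training signal for the reinforcement-learning loop.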
One of the most surprising revelations in the paper is the remarkably low cost of developing R1. The model's training expenses amounted to just $294,000, with an additional $6 million spent on creating the base large language model (LLM) [1]. This total is substantially less than the tens of millions of dollars typically associated with rival models, demonstrating DeepSeek's efficiency in AI development.

R1 is designed to excel at reasoning tasks such as mathematics and coding. As an 'open weight' model, it is freely available for download and has gained significant popularity on the AI community platform Hugging Face, with 10.9 million downloads to date [2]. The model was primarily trained on Nvidia's H800 chips, which in 2023 were barred from sale to China under US export controls [1].
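Because the weights are open, pulling R1 from Hugging Face takes only a few lines. Here is a minimal sketch, assuming the transformers library and the deepseek-ai/DeepSeek-R1 repository ID; the full checkpoint is extremely large, so in practice many users opt for distilled variants or hosted endpoints, and older transformers versions may additionally need trust_remote_code=True.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID on the Hugging Face Hub. The full R1 checkpoint weighs
# hundreds of gigabytes, so treat this as an illustration of the
# open-weight workflow rather than a laptop-ready recipe.
model_id = "deepseek-ai/DeepSeek-R1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Pose a reasoning-style problem and decode the model's answer.
prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```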
The publication of R1's details in a peer-reviewed journal marks a significant milestone in AI transparency. Lewis Tunstall, a machine-learning engineer at Hugging Face, praised this move, stating, "This is a very welcome precedent. If we don't have this norm of sharing a large part of this process publicly, it becomes very hard to evaluate whether these systems pose risks or not" [1].
DeepSeek has addressed speculation about R1's training data, stating that the model did not learn by copying reasoning examples generated by other AI models, such as those from OpenAI [2]. However, the researchers acknowledged that R1's base model was trained on web data, which may have included AI-generated content already present on the internet.

The success of R1 has sparked a new wave of research in the AI community. Other researchers are now exploring ways to apply DeepSeek's methods to improve the reasoning abilities of existing LLMs and extend them to new domains beyond mathematics and coding [1].
This development represents a significant step forward in the field of AI, potentially leading to more efficient and capable models in the future.

Summarized by Navi