Curated by THEOUTPOST
On Wed, 30 Apr, 4:07 PM UTC
5 Sources
[1]
DeepSeek upgrades its AI model for math problem solving | TechCrunch
Chinese AI lab DeepSeek has quietly updated Prover, its AI system designed to solve math-related proofs and theorems. According to the South China Morning Post, DeepSeek uploaded the latest version, Prover V2, to the AI dev platform Hugging Face late on Wednesday. It appears to be built on top of the startup's V3 model, which has 671 billion parameters and adopts a mixture-of-experts (MoE) architecture. Parameter count roughly corresponds to a model's problem-solving skill, while MoE breaks tasks into subtasks and delegates them to smaller, specialized "expert" components (sketched in code below).
DeepSeek last updated Prover in August, describing it at the time as a custom model for formal theorem proving and mathematical reasoning. The company recently released an upgraded version of V3, its general-purpose model, and is expected to update its R1 "reasoning" model soon. In February, Reuters reported that DeepSeek was considering raising outside funding for the first time.
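To make the MoE idea concrete, here is a minimal sketch of top-k expert routing in PyTorch. The class name, layer sizes, and routing rule are invented for illustration; DeepSeek's production architecture is far more elaborate (shared experts, load balancing, and so on).

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy mixture-of-experts layer: route each token to its top-k experts."""

    def __init__(self, dim=64, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        # Pick the k best-scoring experts for each token.
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for a given token, so just a
        # fraction of the total parameters is active per forward pass.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

moe = TinyMoE()
y = moe(torch.randn(8, 64))  # 8 token embeddings in, 8 out
```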
[2]
China's DeepSeek launches new open-source AI after R1 took on OpenAI
DeepSeek has released Prover V2, an open-source AI model focused on math theorem verification. The Chinese artificial intelligence development company uploaded the new open-weight large language model (LLM) to the hosting service Hugging Face on April 30. Released under the permissive open-source MIT license, the latest model aims to tackle math proof verification.
Prover V2 has 671 billion parameters, making it significantly larger than its predecessors, Prover V1 and Prover V1.5, which were released in August 2024. The paper accompanying the first version explained that the model was trained to translate math competition problems into formal logic using the Lean 4 programming language, a tool widely used for proving theorems. The developers say Prover V2 compresses mathematical knowledge into a format that allows it to generate and verify proofs, potentially aiding research and education.
An open-weight release consists of the model's "weights": the file or collection of files that let anyone run the AI locally without relying on external servers. Still, it's worth pointing out that state-of-the-art LLMs require hardware most people don't have access to, because a large parameter count produces large files that demand a lot of RAM or VRAM (GPU memory) and processing power to run.
The new Prover V2 model weighs approximately 650 gigabytes and is expected to run from RAM or VRAM. To get the files down to this size, Prover V2's weights were quantized to 8-bit floating point precision, meaning each parameter is stored in half the space of the usual 16 bits (a bit being a single binary digit). This effectively halves the model's bulk.
Prover V1 is based on the seven-billion-parameter DeepSeekMath model and was fine-tuned on synthetic data. Synthetic data is training data that was itself generated by AI models, increasingly relied upon as higher-quality human-generated data grows scarce. Prover V1.5 reportedly improved on the first version by optimizing both training and execution and achieving higher accuracy in benchmarks. So far, the improvements introduced by Prover V2 are unclear, as no research paper or other information had been published at the time of writing.
The parameter count of the Prover V2 weights suggests it shares the 671-billion-parameter architecture of the company's V3 model, which also underpins its R1 reasoning model. When it was first released, R1 made waves in the AI space with performance comparable to OpenAI's then state-of-the-art o1 model.
Publicly releasing the weights of LLMs is a controversial topic. On one side, it is a democratizing force that lets the public access AI on their own terms without relying on private company infrastructure. On the other, it means the company cannot step in and prevent abuse of the model by enforcing limitations on dangerous user queries. The release of R1 in this manner raised security concerns, and some described it as China's "Sputnik moment." Open-source proponents rejoiced that DeepSeek continued where Meta left off with its LLaMA series of open models, showing that open AI is a serious contender to OpenAI's closed systems.
The accessibility of those models also continues to improve. Even users without access to a supercomputer that costs more than the average home in much of the world can now run LLMs locally, thanks primarily to two AI development techniques: model distillation and quantization. Distillation refers to training a compact "student" network to replicate the behavior of a larger "teacher" model, keeping most of the performance while cutting the parameter count to suit less powerful hardware (see the sketch below). Quantization consists of reducing the numeric precision of a model's weights and activations to shrink its size and boost inference speed with only minor accuracy loss. Prover V2's reduction from 16-bit to 8-bit floating point numbers is one example; further reductions to 4 bits and below are also possible. Both techniques cost some model performance but usually leave the model largely functional. DeepSeek's R1 was distilled into retrained LLaMA and Qwen models ranging from 70 billion down to 1.5 billion parameters; the smallest of those can even run reliably on some mobile devices.
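As a sketch of the distillation objective described above, the following PyTorch snippet trains a student to match a teacher's softened output distribution. The function and the temperature value are our own illustrative choices, not DeepSeek's actual recipe, which involves much more (data generation, sequence-level objectives, and so on).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Example: a batch of 4 tokens over a 10-word vocabulary.
loss = distillation_loss(torch.randn(4, 10), torch.randn(4, 10))
```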
[3]
DeepSeek sharpens its math AI with MoE-powered Prover upgrade
DeepSeek, a Chinese AI lab, has upgraded Prover, its AI model designed to solve math-related proofs and theorems, releasing version V2 on the AI development platform Hugging Face on Wednesday. The latest version appears to be built on top of DeepSeek's V3 model, which boasts 671 billion parameters and utilizes a mixture-of-experts (MoE) architecture. This architecture enables the model to break down complex tasks into subtasks and delegate them to specialized "expert" components. In the context of AI models, parameters are a rough measure of a model's problem-solving capabilities.
DeepSeek last updated Prover in August, describing it as a custom model for formal theorem proving and mathematical reasoning. The upgrade comes as DeepSeek continues to expand its AI offerings. In February, Reuters reported that the company was considering raising outside funding for the first time. Recently, DeepSeek released an upgraded version of its general-purpose V3 model and is expected to update its R1 "reasoning" model soon.
[4]
DeepSeek's New Math AI Model Can Help Prove Formal Math Theorems
DeepSeek, the Hangzhou, China-based artificial intelligence (AI) firm, released an updated version of its Prover model on Wednesday. Dubbed DeepSeek-Prover-V2, it is a highly specialised model that focuses on proving formal mathematical theorems. The large language model (LLM) uses the Lean 4 programming language to check that mathematical proofs are logically consistent, analysing each step independently (a toy Lean example follows below). Like the Chinese firm's previous releases, DeepSeek-Prover-V2 is an open-source model and can be downloaded from popular repositories such as GitHub and Hugging Face.
The AI firm detailed the new model on its GitHub listing page. It is essentially a reasoning-focused model with a visible chain-of-thought (CoT) that operates in the domain of mathematics. It is built on and distilled from the DeepSeek-V3 AI model, which was released in December 2024.
DeepSeek-Prover-V2 can be used in a variety of ways. It can solve high-school to college-level mathematical problems and find and fix errors in mathematical proofs. It can also serve as a teaching aid, generating step-by-step explanations for proofs, and can assist mathematicians and researchers in exploring new theorems and proving their validity.
It is available in two model sizes: a seven-billion-parameter version and a larger 671-billion-parameter version. The latter is trained on top of DeepSeek-V3-Base, while the former is built upon DeepSeek-Prover-V1.5-Base and comes with a context length of up to 32,000 tokens.
As for the training process, the researchers implemented a cold-start system by prompting the base model to decompose complex problems into a series of subgoals. The proofs of resolved subgoals were then added to the chain-of-thought and combined with the base model's reasoning to create an initial cold start for reinforcement learning.
Prover-V2 highlights how iterative changes to an AI model's training process can significantly improve its specialised capabilities. As with other open-source model releases, however, details about the core architecture and the full training dataset have not been disclosed.
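For a flavour of what Lean 4 verification looks like, here is a toy theorem with a machine-checked proof. It is our own trivial example, not one drawn from DeepSeek's benchmarks or training data.

```lean
-- Commutativity of natural-number addition, stated and proved in Lean 4.
-- `Nat.add_comm` is a standard-library lemma; Lean's kernel checks that
-- the proof term matches the stated goal exactly.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Purely computational facts can be closed by `rfl` (definitional equality).
example : 2 + 2 = 4 := rfl
```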
[5]
DeepSeek open-sources new AI model - SCMP By Investing.com
Investing.com -- In the ongoing race to advance generative artificial intelligence (AI) capabilities, Chinese start-up DeepSeek has quietly open-sourced a new specialist AI model, according to a report from the South China Morning Post. The move came just a day after Alibaba (NYSE:BABA) launched the third generation of its Qwen family.
The Hangzhou-based start-up uploaded its latest open-source Prover-V2 model to Hugging Face, the world's largest open-source AI community, without making any announcements on its official social media channels. The move has increased anticipation for DeepSeek's upcoming R2 reasoning model.
The Prover series by DeepSeek consists of domain-specific models designed to solve math-related problems. The company has not yet provided any details about the new model on its Hugging Face page. However, the uploaded files suggest it was built on top of DeepSeek's V3 model, which has 671 billion parameters and adopts a mixture-of-experts architecture for cost-efficient training and operation.
The development of a math-focused model has led to speculation that DeepSeek will soon launch additional models. The company, however, did not respond to a request for comment on this matter.
Chinese AI startup DeepSeek quietly releases Prover V2, an advanced open-source AI model designed for mathematical theorem proving and reasoning, built on its powerful V3 architecture.
Chinese AI startup DeepSeek has quietly unveiled Prover V2, the latest iteration of its specialized AI system designed for mathematical problem-solving and theorem proving. The company uploaded the new model to the AI development platform Hugging Face on April 30, 2025, without any formal announcement [1][5].
Prover V2 is built on DeepSeek's V3 model, boasting an impressive 671 billion parameters and utilizing a mixture-of-experts (MoE) architecture [1][3]. This architecture allows the model to break down complex tasks into subtasks and delegate them to specialized "expert" components, enhancing its problem-solving capabilities [1].
The new model is designed to translate mathematical problems into formal logic using the Lean 4 programming language, a tool widely used for proving theorems [2]. It can solve high-school to college-level mathematical problems, find and fix errors in proofs, generate step-by-step explanations, and assist researchers in exploring new theorems [4].
DeepSeek has released Prover V2 as an open-source model under the permissive MIT license [2]. It is available in two sizes: a seven-billion-parameter version built upon DeepSeek-Prover-V1.5-Base, and a larger 671-billion-parameter version trained on top of DeepSeek-V3-Base [4].
The model has been quantized to 8-bit floating point precision, effectively halving its size to approximately 650 gigabytes [2].
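The arithmetic behind that figure is easy to check. A rough back-of-the-envelope estimate (ignoring file-format overhead and the GB/GiB distinction) looks like this:

```python
# Approximate on-disk size of a 671-billion-parameter model.
params = 671e9

fp16_bytes = params * 2  # 16 bits = 2 bytes per parameter
fp8_bytes = params * 1   # 8 bits = 1 byte per parameter

print(f"FP16: ~{fp16_bytes / 1e12:.2f} TB")  # ~1.34 TB
print(f"FP8:  ~{fp8_bytes / 1e9:.0f} GB")    # ~671 GB, close to the ~650 GB reported
```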
The researchers implemented a cold-start training system by prompting the base model to decompose complex problems into subgoals. The proofs of resolved subgoals were then added to the chain-of-thought (CoT) and combined with the reasoning of the base model to create an initial cold start for reinforcement learning [4].
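Schematically, the data-construction loop described above might look like the sketch below. Every helper here is a hypothetical stand-in; the real pipeline operates on Lean 4 proof states and model outputs, not plain strings.

```python
from typing import Optional

def decompose(problem: str) -> list[str]:
    # Placeholder: the base model would emit genuine subgoals here.
    return [f"{problem} :: subgoal {i}" for i in range(3)]

def prove_subgoal(goal: str) -> Optional[str]:
    # Placeholder: a proof searcher would return a checked Lean proof,
    # or None if the subgoal could not be closed.
    return f"proof of ({goal})"

def build_cold_start_sample(problem: str) -> dict:
    chain_of_thought = []
    for goal in decompose(problem):       # 1. split the problem into subgoals
        proof = prove_subgoal(goal)
        if proof is not None:             # 2. keep only resolved subgoals
            chain_of_thought.append((goal, proof))
    # 3. The resolved proofs, merged with the base model's reasoning trace,
    #    become an initial ("cold start") dataset for reinforcement learning.
    return {"problem": problem, "cot": chain_of_thought}

print(build_cold_start_sample("a + b = b + a"))
```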
While specific improvements in Prover V2 are yet to be detailed, its predecessor, Prover V1.5, had already shown optimizations in both training and execution, achieving higher accuracy in benchmarks [2].
The release of Prover V2 highlights the ongoing competition in AI development, particularly in specialized domains like mathematics. It follows closely on the heels of Alibaba's launch of its third-generation Qwen family of AI models [5].
The open-sourcing of such advanced models has sparked discussions about democratizing AI access while raising concerns about potential misuse. However, it also demonstrates that open AI is becoming a serious contender to closed AI systems [2].
This release comes at a time when DeepSeek is gaining prominence in the AI field. The company recently updated its general-purpose V3 model and is expected to soon release an update to its R1 "reasoning" model [1][3]. Reports suggest that DeepSeek may be considering raising outside funding for the first time, indicating potential for further growth and development [1].
As AI continues to advance rapidly, specialized models like Prover V2 showcase the potential for AI to make significant contributions in fields requiring complex reasoning and problem-solving skills, potentially revolutionizing mathematical research and education.