DeepSeek Unveils Enhanced V3 AI Model with MIT License, Boosting Accessibility and Performance

DeepSeek has released an improved version of its DeepSeek-V3 large language model under the MIT License, offering better performance in programming and reasoning tasks while increasing its accessibility for commercial use.

DeepSeek Releases Improved V3 Model

DeepSeek, a Chinese artificial intelligence lab, has quietly rolled out an updated version of its DeepSeek-V3 large language model (LLM) with significant improvements and a new open-source license. The release, first reported by software developer Simon Willison, marks a notable advancement in the accessibility and capabilities of open-source AI models [1].

Key Enhancements and Licensing

The latest iteration of DeepSeek-V3, dubbed V3-0324, introduces several notable improvements:

  1. MIT License Adoption: The model has transitioned from a custom open-source license to the widely used MIT License, allowing developers to use and modify the model in commercial projects with minimal restrictions [1].

  2. Improved Performance: Early benchmarks suggest that the new version outperforms its predecessor in programming tasks. A reported benchmark test showed the model achieving a score of about 60% in generating Python and Bash code, several percentage points higher than the original DeepSeek-V3 [1].

  3. Hardware Efficiency: Despite its 671 billion total parameters, DeepSeek-V3 activates only about 37 billion of them for any given prompt thanks to its mixture-of-experts design, making it more efficient than dense LLMs of comparable size [1]. A toy illustration of this routing idea follows the list.
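
That sparse-activation behavior comes from a mixture-of-experts (MoE) architecture: a router sends each token to a small number of expert sub-networks instead of pushing it through every parameter. The toy PyTorch sketch below illustrates only the routing pattern; the layer sizes, expert count, and top-k value are made up and bear no relation to DeepSeek's actual implementation.

```python
# Toy mixture-of-experts routing sketch (illustrative only, not DeepSeek's code).
# The point: with top-k routing, each token touches only k of the experts,
# so the active parameter count per token is a small fraction of the total.
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                # x: (num_tokens, d_model)
        scores = self.router(x)                          # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only k experts per token
        weights = weights.softmax(dim=-1)                # normalize the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(5, 64)
print(layer(tokens).shape)  # torch.Size([5, 64]); each token used only 2 of the 8 experts
```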

Technical Capabilities and Comparisons

While DeepSeek-V3 is a general-purpose model, it has shown promising capabilities in specific areas:

  1. Reasoning and Math Skills: The model can solve some math problems and generate code, although it's not specifically optimized for reasoning like its counterpart, DeepSeek-R1 [1].

  2. Competitive Performance: Early testing indicates that the updated V3 model performs better than comparable models such as OpenAI's o3-mini, according to AI entrepreneur Paul Gauthier [2].

  3. Hardware Compatibility: Awni Hannun, a research scientist at Apple Inc.'s machine learning research group, successfully ran the new DeepSeek-V3 on a high-end Mac Studio, generating output at about 20 tokens per second [1]; a hedged sketch of a similar local setup follows this list.
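
For readers curious about that kind of local setup, the sketch below shows the general shape of running a model on Apple silicon with the open-source mlx-lm package, one common way to run LLMs on Macs. The model identifier is an assumption (a heavily quantized community conversion would be required, and even then the full model needs several hundred gigabytes of unified memory), and throughput depends entirely on the hardware.

```python
# Hedged sketch: local text generation on Apple silicon with mlx-lm
# (pip install mlx-lm). The model ID below is an assumption -- a 4-bit
# community conversion of DeepSeek-V3-0324 -- not a guaranteed repository.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-V3-0324-4bit")  # assumed repo name

prompt = "Write a Bash one-liner that counts unique IP addresses in access.log."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```

With verbose output enabled, mlx-lm reports a tokens-per-second figure as it generates, which is roughly how throughput numbers like the one above are measured.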

Impact on the AI Landscape

The release of the improved DeepSeek-V3 model has broader implications for the AI industry:

  1. Open-Source Advancement: By releasing under the MIT License, DeepSeek is contributing to the democratization of AI technology, potentially accelerating innovation in the field [1][2].

  2. Chinese AI Capabilities: The update follows the success of DeepSeek's R1 model, which had previously demonstrated China's growing prowess in AI development [2].

  3. Industry Competition: DeepSeek's advancements have spurred increased activity among Chinese tech giants, with companies like Baidu, ByteDance, Alibaba, and Tencent releasing new AI models to capitalize on the momentum [2].

Training and Efficiency

The original DeepSeek-V3 model was trained on a dataset of 14.8 trillion tokens using approximately 2.8 million GPU hours, significantly less compute than is typically required for frontier LLMs. To enhance output quality, DeepSeek engineers fine-tuned the model on responses generated by DeepSeek-R1 [1].
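
As a rough illustration of what collecting that kind of fine-tuning data can look like, the hedged sketch below gathers answers from a reasoning model over an OpenAI-compatible API and writes prompt/response pairs to a JSONL file for a later supervised fine-tuning step. The endpoint, model name, and file layout are assumptions for illustration; this is a generic sketch of the distillation idea, not DeepSeek's actual pipeline.

```python
# Hedged sketch: build a small supervised fine-tuning dataset from a reasoning
# model's answers. NOT DeepSeek's published pipeline; the endpoint and model
# name below are assumptions based on an OpenAI-compatible API.
import json
from openai import OpenAI  # pip install openai

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")  # assumed endpoint

prompts = [
    "Prove that the sum of two even integers is even.",
    "Write a Python function that returns the n-th Fibonacci number iteratively.",
]

with open("distill_data.jsonl", "w") as f:
    for prompt in prompts:
        reply = client.chat.completions.create(
            model="deepseek-reasoner",  # assumed name for an R1-style reasoning model
            messages=[{"role": "user", "content": prompt}],
        )
        # Store prompt/response pairs; a later SFT step would train the base model on these.
        record = {"prompt": prompt, "response": reply.choices[0].message.content}
        f.write(json.dumps(record) + "\n")
```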

As the AI landscape continues to evolve rapidly, DeepSeek's latest release represents a significant step forward in making powerful language models more accessible and efficient for developers and researchers worldwide.
