DeepSeek Launches V3.2 Models to Challenge OpenAI and Google's AI Dominance

Reviewed by Nidhi Govil


Chinese AI company DeepSeek releases two new reasoning models, V3.2 and V3.2-Speciale, that match or exceed GPT-5 and Gemini 3 Pro performance in benchmarks while remaining open-source and cost-effective.

DeepSeek Challenges AI Giants with New Reasoning Models

Chinese AI company DeepSeek has released two powerful new reasoning models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, that the Hangzhou-based startup claims match or exceed the performance of OpenAI's GPT-5 and Google's Gemini 3 Pro [1][2]. The release represents a significant development in the global AI competition, demonstrating that Chinese companies can produce frontier AI systems despite U.S. export controls on advanced semiconductor technology.

Source: Seeking Alpha

Model Specifications and Availability

DeepSeek-V3.2 serves as the official successor to the experimental V3.2-Exp model released in September and is now available across the company's app, web interface, and API [3]. The standard V3.2 model is positioned as "your daily driver at GPT-5 level performance," designed to balance inference efficiency with long-context capabilities.

The more powerful V3.2-Speciale variant is currently available only through a temporary API endpoint until December 15, 2025 [3]. Both models feature 685 billion parameters and support context windows of 128,000 tokens, making them suitable for analyzing lengthy documents and complex codebases [2].
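Since DeepSeek exposes its models through an OpenAI-compatible API, a request can be assembled in the familiar chat-completion shape. This is a hedged sketch: the base URL follows DeepSeek's published convention, but the `deepseek-reasoner` model identifier standing in for V3.2 is an assumption, not something the article confirms.

```python
# Sketch of a chat-completion request against DeepSeek's OpenAI-compatible
# API. The model identifier below is an assumption for illustration.
import json

BASE_URL = "https://api.deepseek.com"   # DeepSeek's documented API host
MODEL = "deepseek-reasoner"             # assumed name for the V3.2 reasoner

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize this long design document.")
print(json.dumps(payload, indent=2))

# Sending it would then look like:
#   from openai import OpenAI
#   client = OpenAI(base_url=BASE_URL, api_key="...")
#   client.chat.completions.create(**payload)
```

The 128,000-token context window means a payload like this can carry an entire codebase or report in the `messages` content rather than requiring chunking.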

Breakthrough Performance in Competitive Benchmarks

The V3.2-Speciale model has achieved remarkable results in prestigious international competitions, earning gold-medal status across multiple domains. The model scored 35 of 42 points at the 2025 International Mathematical Olympiad, 492 of 600 points at the International Olympiad in Informatics (ranking 10th overall), and solved 10 of 12 problems at the ICPC World Finals, placing second [2][4].

In direct benchmark comparisons, DeepSeek-V3.2-Speciale achieved a 96.0% pass rate on AIME 2025, surpassing both GPT-5-High's 94.6% and Gemini-3.0-Pro's 95.0% [2]. On the Harvard-MIT Mathematics Tournament, the model scored 99.2%, exceeding Gemini's 97.5%.

Technical Innovation: DeepSeek Sparse Attention

At the core of these models lies DeepSeek Sparse Attention (DSA), a novel architectural innovation that dramatically reduces the computational cost of processing long sequences [2]. Traditional attention mechanisms scale quadratically with input length, so a document twice as long requires roughly four times the computation. DSA addresses this limitation through a "lightning indexer" that identifies only the most relevant context portions for each query.
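The idea can be pictured with a toy NumPy sketch: a cheap per-query scoring pass selects the top-k keys, and full softmax attention runs only over that subset. This is an illustration of the general sparse-attention pattern, not DeepSeek's actual architecture; in particular, DSA's lightning indexer is a learned component, whereas here the raw attention scores double as the index for simplicity.

```python
# Toy sparse attention: an "indexer" scores all keys per query, and softmax
# attention runs only over the top-k. Illustrative only, not DSA itself.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(q, k, v, top_k):
    """q: (nq, d); k, v: (nk, d). Each query attends to only top_k keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])         # (nq, nk) indexer scores
    keep = np.argsort(scores, axis=-1)[:, -top_k:]  # indices of top-k keys
    out = np.empty_like(q)
    for i in range(q.shape[0]):
        idx = keep[i]
        w = softmax(scores[i, idx])                 # softmax over kept keys only
        out[i] = w @ v[idx]
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
out = sparse_attention(q, k, v, top_k=3)            # each query reads 3 of 8 keys
```

With `top_k` equal to the full key count the sketch reduces to ordinary dense attention; shrinking `top_k` is what cuts the per-query cost from the full sequence length down to a small constant.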

This breakthrough reduces inference costs by approximately 70% compared to previous models. Processing 128,000-token contexts now costs roughly $0.70 per million tokens for decoding, down from $2.40 for the previous V3.1-Terminus model [2].
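The quoted ~70% figure follows directly from the two prices:

```python
# Decoding cost per million tokens at 128K context, as quoted above.
old_cost = 2.40   # V3.1-Terminus, $ per 1M tokens
new_cost = 0.70   # V3.2 with DSA

reduction = (old_cost - new_cost) / old_cost
print(f"{reduction:.0%}")  # prints "71%", in line with the ~70% claim
```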

Source: VentureBeat


Advanced Agent Capabilities and Tool Integration

DeepSeek-V3.2 introduces "thinking in tool-use," allowing the model to maintain reasoning chains while simultaneously executing code, searching the web, and manipulating files [2][3]. This capability addresses a significant limitation of previous AI models, which would lose their reasoning context each time they accessed external tools.
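A minimal way to picture the difference is an agent loop that appends each tool result to one persistent transcript instead of restarting its reasoning after every call. Everything below is a stand-in sketch: the tool names, the `plan_next` "model," and the loop structure are illustrative assumptions, not DeepSeek's implementation.

```python
# Sketch of a reasoning loop that keeps its chain of thought across tool
# calls: `transcript` persists, so no context is lost when a tool runs.
def run_agent(task, tools, max_steps=5):
    transcript = [("task", task)]        # persistent reasoning context
    for _ in range(max_steps):
        thought, tool_name, arg = plan_next(transcript)
        transcript.append(("thought", thought))
        if tool_name is None:            # the "model" decides it is done
            return thought, transcript
        result = tools[tool_name](arg)   # execute a tool mid-reasoning
        transcript.append((tool_name, result))
    return None, transcript

def plan_next(transcript):
    """Stand-in for the model: search once, then answer from the result."""
    if not any(kind == "search" for kind, _ in transcript):
        return "I need data first", "search", "DeepSeek V3.2 context window"
    last_result = transcript[-1][1]
    return f"Answer: {last_result}", None, None

tools = {"search": lambda query: "128,000 tokens"}
answer, transcript = run_agent("What is V3.2's context window?", tools)
# answer == "Answer: 128,000 tokens"
```

In the older pattern this sketch contrasts with, the model would be re-prompted from scratch after each tool result; here the second planning step still sees the original task and the first thought alongside the search output.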

The models are supported by a new synthetic dataset spanning more than 1,800 environments and 85,000 complex instructions, representing an expansion of DeepSeek's agent-training approach [3].

TheOutpost.ai

© 2025 Triveous Technologies Private Limited