MiniMax-M2: The New Open-Source AI Powerhouse Challenging Proprietary Models

Reviewed by Nidhi Govil


MiniMax-M2, a newly released open-source AI model, is making waves in the AI community with its impressive performance in coding and agentic tasks, rivaling proprietary models while offering cost-efficiency and enterprise-friendly licensing.


A New Contender in the AI Arena

MiniMax, a Chinese startup, has released its latest large language model (LLM), MiniMax-M2, which is quickly gaining recognition as a formidable player in the open-source AI landscape. The model is particularly noteworthy for its exceptional performance in agentic tool use, a capability increasingly valued by enterprises because it lets a model operate external software and applications with minimal human guidance [1].
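
To make that concrete, here is a minimal, purely illustrative sketch of an agentic tool-use loop in Python. The tools and the hard-coded "model" decision are invented for the example and are not part of MiniMax-M2 or its API.

def get_weather(city: str) -> str:
    # Stand-in for a real external tool (an API, a database, a browser, and so on).
    return f"It is 22 degrees C and clear in {city}."

def search_docs(query: str) -> str:
    # Another stand-in tool.
    return f"Top result for '{query}': ..."

TOOLS = {"get_weather": get_weather, "search_docs": search_docs}

def fake_model(task: str) -> dict:
    # A real LLM would emit this structured tool call itself, based on the task.
    return {"tool": "get_weather", "args": {"city": "Shanghai"}}

def run_agent(task: str) -> str:
    call = fake_model(task)                        # 1. the model picks a tool and arguments
    result = TOOLS[call["tool"]](**call["args"])   # 2. the host application executes the call
    return result                                  # 3. the result is fed back to the model

print(run_agent("What is the weather in Shanghai right now?"))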

Impressive Benchmark Performance

According to independent evaluations by Artificial Analysis, MiniMax-M2 has claimed the top spot among open-weight systems worldwide on the Intelligence Index, a comprehensive measure of reasoning, coding, and task-execution performance [1]. The model's performance on agentic benchmarks is particularly striking:

  • τ²-Bench: 77.2
  • BrowseComp: 44.0
  • FinSearchComp-global: 65.5

These scores place MiniMax-M2 at or near the level of top proprietary systems such as GPT-5 (thinking) and Claude Sonnet 4.5, making it the highest-performing open model yet released for real-world agentic and tool-calling tasks [1].

Efficient Architecture and Enterprise Appeal

MiniMax-M2 is built on an efficient Mixture-of-Experts (MoE) architecture, which allows it to deliver high-end capabilities while remaining practical for enterprise deployment. The model has 230 billion total parameters but activates only 10 billion during inference, significantly reducing latency and compute requirements [1].
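
As a rough illustration of how that sparse activation works (a toy layer, not MiniMax's implementation; the layer sizes, expert count, and top-k value are arbitrary), a Mixture-of-Experts block routes each token through only a few of its experts:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    # Toy Mixture-of-Experts layer: many experts exist, but only top_k run per token.
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)   # scores every expert for each token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                               # x: (num_tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)            # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoE()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64]); only 2 of 8 experts ran per token

The full parameter set stays in memory, but each token only pays the compute cost of the experts the router actually selects, which is why a 230-billion-parameter model can behave more like a roughly 10-billion-parameter one at inference time.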

This design enables enterprises to operate advanced reasoning and automation workloads on fewer GPUs, achieving near-state-of-the-art results without the infrastructure demands or licensing costs associated with proprietary frontier systems. According to Artificial Analysis, the model can be served efficiently on as few as four NVIDIA H100 GPUs at FP8 precision [1].
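
Sketched with vLLM's Python API, such a deployment might look like the following; the model ID, the FP8 quantization option, and four-way tensor parallelism are assumptions to check against the official model card and vLLM's documentation, not a confirmed recipe.

from vllm import LLM, SamplingParams

# Hypothetical setup mirroring the reported four-H100, FP8 configuration.
# Verify the exact Hugging Face model ID and quantization support before use.
llm = LLM(
    model="MiniMaxAI/MiniMax-M2",   # assumed model ID
    tensor_parallel_size=4,         # shard the model across four GPUs
    quantization="fp8",             # assumed FP8 weight precision
)

outputs = llm.generate(
    ["Summarize the advantages of Mixture-of-Experts models in two sentences."],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(outputs[0].outputs[0].text)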

Outperforming Proprietary Models

MiniMax-M2's performance extends beyond raw intelligence scores. It leads or closely trails top proprietary systems across benchmarks for end-to-end coding, reasoning, and agentic tool use. Notable achievements include:

  • SWE-bench Verified: 69.4 (close to GPT-5's 74.9)
  • ArtifactsBench: 66.8 (above Claude Sonnet 4.5 and DeepSeek-V3.2)
  • GAIA (text only): 75.7 (surpassing DeepSeek-V3.2)

In the Artificial Analysis Intelligence Index v3.0, MiniMax-M2 achieved a score of 61, ranking eighth overall and outperforming Anthropic's Claude Opus 4.1, which scored 59.

Open-Source Availability and Enterprise-Friendly Licensing

One of the most significant aspects of MiniMax-M2 is its availability under the permissive MIT License, which allows developers to freely use, deploy, retrain, and modify the model, including for commercial purposes [1]. The model is accessible through Hugging Face, GitHub, ModelScope, and MiniMax's own API, which supports the OpenAI and Anthropic API standards.
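
Because the API follows the OpenAI standard, existing client code should need little more than a new base URL and key. The endpoint URL and model name below are placeholders to check against MiniMax's documentation.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.minimax.example/v1",  # placeholder; use the URL from MiniMax's docs
    api_key="YOUR_MINIMAX_API_KEY",
)

response = client.chat.completions.create(
    model="MiniMax-M2",                         # placeholder model identifier
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
)
print(response.choices[0].message.content)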

MiniMax claims that the model delivers results at roughly 8% of the cost of Claude Sonnet and runs nearly twice as fast, making it an attractive option for cost-conscious enterprises seeking advanced AI capabilities.
