Cursor admits Composer 2 was built on Chinese AI model, exposing Western open-source gaps

Reviewed by Nidhi Govil

AI coding startup Cursor launched Composer 2 as frontier-level coding intelligence but failed to disclose that it was built on Moonshot AI's Kimi K2.5 model. A developer exposed the connection within hours by intercepting API traffic. The incident highlights why Western companies increasingly turn to Chinese open models when competing with larger AI firms.

Cursor Launches Composer 2 Without Disclosing Foundation

Cursor, the AI coding startup valued at $29.3 billion, launched Composer 2 this week, promoting it as offering frontier-level coding intelligence for AI-powered software development [1]. The San Francisco-based company, which now serves more than 1 million daily users, including major clients like Stripe and Figma, positioned the new AI coding model as evidence of its capability to compete with larger AI firms like Anthropic and OpenAI [2]. However, the announcement omitted a crucial detail: Composer 2 was built on top of Kimi K2.5, an open-source model from Moonshot AI, a Chinese startup backed by Alibaba, Tencent, and HongShan, formerly known as Sequoia China [3].

Source: SiliconANGLE

Developer Exposes Kimi Connection Through API Traffic

A developer named Fynn figured out the connection within hours of the launch. By setting up a local debug proxy server and routing Cursor's API traffic through it, Fynn intercepted the outbound request and found the model ID in plain sight: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast [3]. "So composer 2 is just Kimi K2.5 with RL," Fynn wrote on X. "At least rename the model ID." The post accumulated 2.6 million views, forcing Cursor to acknowledge what it had initially concealed [1].

Cursor's vice president of developer education, Lee Robinson, soon confirmed the connection, stating, "Yep, Composer 2 started from an open-source base!" He emphasized that only about one-fourth of the compute spent on the final model came from the base, with the rest going into Cursor's own reinforcement-learning training [1]. Co-founder Aman Sanger, who leads the research team, acknowledged the oversight: "It was a miss to not mention the Kimi base in our blog from the start. We'll fix that for the next model" [1].

Source: ET

Why Cursor Built on Moonshot AI Instead of Western Alternatives

The decision to build on Kimi K2.5 reveals significant gaps in Western open-source AI options. Kimi K2.5 is a 1-trillion-parameter Mixture-of-Experts model with 32 billion active parameters, a 256,000-token context window, and native image and video support [3]. Released under a modified MIT license permitting commercial use through an authorized commercial partnership with Fireworks AI, it proved competitive on agentic benchmarks [1].

When AI coding startups need strong open models for continued pretraining and reinforcement learning on agentic coding tasks, Western options have been surprisingly limited. Meta's Llama 4 Behemoth has been indefinitely delayed, with no public release date as of March 2026. Google's Gemma 3 family topped out at 27 billion parameters, insufficient for frontier-class coding agents. While OpenAI released the gpt-oss family in August 2025, including a 120-billion-parameter variant, it activates only 5.1 billion parameters per token, compared with Kimi's 32 billion active parameters [3]. For programming-optimized models handling complex, multi-step autonomous programming tasks across a massive context window, this intelligence density matters significantly.

Composer 2 Performance and Pricing for Agentic Coding Tasks

Despite the controversy, Composer 2 delivers substantial improvements over its predecessor. The AI coding model scored 61.3 on CursorBench, 61.7 on Terminal-Bench 2.0, and 73.7 on SWE-bench Multilingual, up from Composer 1.5's 44.2, 47.9, and 65.9, respectively [4]. On Terminal-Bench 2.0, Composer 2 outperformed Claude Opus 4.6's score of 58.0, though it still trails GPT-5.4's leading score of 75.1 [4].

The standard edition costs $0.50 per million input tokens and $2.50 per million output tokens, an 86% cost reduction from Composer 1.5's $3.50 and $17.50 pricing [4]. Aman Sanger explained that training Composer 2 solely on coding-related data enabled the company to build a smaller, more cost-efficient model [2]. The model supports a 200,000-token context window and uses self-summarization techniques to compress information during long-horizon coding tasks.

Implications for Silicon Valley and the AI Arms Race

The revelation carries particular weight given the current geopolitical climate surrounding AI development. Building on a Chinese model feels especially fraught as Silicon Valley frames the AI arms race as an existential battle between the United States and China [1]. This comes after the industry's panic over DeepSeek's release of a competitive model early last year. For a company that raised a $2.3 billion round at a $29.3 billion valuation and reportedly exceeds $2 billion in annualized revenue, the lack of upfront disclosure raises questions about transparency in an industry increasingly shaped by national security concerns [1].

The incident suggests that Western AI companies may continue reaching for Chinese foundation models until domestic alternatives match their capabilities for specialized applications. Cursor is now in talks to raise financing at a roughly $50 billion valuation, making its technology choices and disclosure practices increasingly significant for investors and enterprise customers evaluating long-term partnerships [2]. As the company integrates Composer 2 into its agentic AI coding environment, with access to semantic code search, file operations, shell commands, and browser control, developers must weigh performance gains against the model's origins and what that signals about the state of Western open-source AI infrastructure [4].

Source: Benzinga
