6 Sources
[1]
AI Coding Startup Cursor Plans New Model to Rival Anthropic, OpenAI
Cursor co-founder Aman Sanger said Composer 2 was trained solely on coding-related data to build a smaller model that's meant to be less expensive to use. Cursor, a leading artificial intelligence startup for coding, is set to release a more efficient AI model for software development in a bid to keep pace with larger firms like Anthropic PBC and OpenAI. The company on Thursday plans to unveil Composer 2, which is meant to work as an AI agent that carries out lengthy coding tasks on a user's behalf. Anthropic and OpenAI have also introduced more powerful AI models that they say can take on increasingly complicated and time-consuming work writing software. San Francisco-based Cursor launched its first AI coding assistant in 2023 and quickly caught on with professional software developers, leading to a new style of programming known as vibe coding. The company now has more than 1 million daily users, including 50,000 businesses such as payment processing firm Stripe Inc. and creative software maker Figma Inc. Cursor has also been in talks to raise a new round of financing at a roughly $50 billion valuation, Bloomberg News reported this month. However, the company faces heated competition from OpenAI, Anthropic and a number of newer startups that offer AI coding assistants designed to field more complex tasks on behalf of the user. Cursor supports a wide range of models, including those from OpenAI and Anthropic, and counts the ChatGPT maker as an investor. Cursor co-founder Aman Sanger, who leads its research team, said the startup focused on training Composer 2 solely on coding-related data -- an effort that let it build a smaller model that's meant to be less expensive to use. Unlike other leading AI developers whose tools are used for a wide range of tasks, Cursor's model is designed purely for coding. "It won't help you do your taxes," Sanger said. "It won't be able to write poems."
[2]
Cursor's new coding model Composer 2 is here: It beats Claude Opus 4.6 but still trails GPT-5.4
Cursor, a San Francisco AI coding platform from startup Anysphere valued at $29.3 billion, has launched Composer 2, a new in-house coding model now available inside its agentic AI coding environment, and it offers drastically improved benchmarks over its prior in-house model. It is also launching Composer 2 Fast, a higher-priced but faster variant, and making it the default experience for users. Here's the cost breakdown: Composer 2 costs $0.50 per million input tokens and $2.50 per million output tokens, while Composer 2 Fast costs $1.50 and $7.50, respectively. That's a big drop from Cursor's predecessor in-house model, Composer 1.5, from February, which cost $3.50 per million input tokens and $17.50 per million output tokens; Composer 2 is about 86% cheaper on both counts, and Composer 2 Fast is roughly 57% cheaper than Composer 1.5. There are also discounts for "cache-read pricing," that is, sending some of the same tokens in a prompt to the model again: $0.20 per million tokens for Composer 2 and $0.35 per million for Composer 2 Fast, versus $0.35 per million for Composer 1.5. It also matters that this appears to be a Cursor-native release, not a broadly distributed standalone model. In the company's announcement and model documentation, Composer 2 is described as available in Cursor, tuned for Cursor's agent workflow and integrated with the product's tool stack. The materials provided do not indicate separate availability through external model platforms or as a general-purpose API outside the Cursor environment. The deeper technical claim in this release is not merely that Composer 2 scores higher than Composer 1.5. It is that Cursor says the model is better suited to long-horizon agentic coding. In its blog, Cursor says the quality gains come from its first continued pretraining run, which gave it a stronger base for scaled reinforcement learning. From there, the company says it trained Composer 2 on long-horizon coding tasks and that the model can solve problems requiring hundreds of actions. That framing is important because it addresses one of the biggest unresolved issues in coding AI. 
Many models are good at isolated code generation. Far fewer remain reliable across a longer workflow that includes reading a repository, deciding what to change, editing multiple files, running commands, interpreting failures and continuing toward a goal. Cursor's documentation reinforces that this is the use case it cares about. It describes Composer 2 as an agentic model with a 200,000-token context window, tuned for tool use, file edits and terminal operations inside Cursor. It also notes training techniques such as self-summarization for long-running tasks. For developers already using Cursor as their main environment, that tighter tuning may matter more than a generic leaderboard claim. Cursor's published results show a clear improvement over prior Composer models. The company lists Composer 2 at 61.3 on CursorBench, 61.7 on Terminal-Bench 2.0, and 73.7 on SWE-bench Multilingual. That compares with Composer 1.5 at 44.2, 47.9 and 65.9, and Composer 1 at 38.0, 40.0 and 56.9. The release is more measured than some model launches because Cursor is not claiming universal leadership. On Terminal-Bench 2.0, which measures how well an AI agent performs tasks in command line terminal-style interfaces, GPT-5.4 still leads at 75.1, while Composer 2 scores 61.7, ahead of Opus 4.6 at 58.0, Opus 4.5 at 52.1 and Composer 1.5 at 47.9. That makes Cursor's pitch more pragmatic and arguably more useful for buyers. The company is not saying Composer 2 is the single best model at everything. It is saying the model has moved into a more competitive quality tier while offering more attractive economics and stronger integration with the product developers are already using. Cursor also included a performance-versus-cost chart on its CursorBench benchmarking suite that appears designed to make a Pareto-style argument for Composer 2. 
In that graphic, Composer 2 sits at a stronger cost-to-performance point than Composer 1.5 and compares favorably with higher-cost GPT-5.4 and Opus 4.6 settings shown by Cursor. The company's message is not simply that Composer 2 scores higher than its predecessor, but that it may offer a more efficient cost-to-intelligence tradeoff for everyday coding work inside Cursor. For readers deciding whether to use Composer 2, the most important question may not be benchmark performance alone. It may be whether they want a model optimized for Cursor's own product experience. That can be a strength. According to the documentation, Composer 2 can access Cursor's agent tool stack, including semantic code search, file and folder search, file reads, file edits, shell commands, browser control and web access. That kind of integration can be more valuable than raw model quality if the goal is to complete real software tasks rather than produce impressive one-shot answers. But it also narrows the addressable audience. Teams looking for a model they can deploy broadly across multiple external tools and platforms should recognize that Cursor is presenting Composer 2 as a model for Cursor users, not as a generally available standalone foundation model. The significance of Composer 2 is not that Cursor has suddenly taken the top spot on every coding benchmark. It has not. The more important point is that Cursor is making an operational argument: its model is getting better, its pricing is low enough to encourage broader use, and its faster tier is responsive enough that the company is comfortable making it the default despite the higher cost. That combination could resonate with engineering teams that increasingly care less about abstract model prestige and more about whether an assistant can stay useful across long coding sessions without becoming prohibitively expensive. Cursor's broader pricing structure helps frame the competitive pressure around this launch. 
On its current pricing page, Cursor offers a free Hobby tier, a Pro plan at $20 per month, Pro+ at $60 per month, and Ultra at $200 per month for individual users, with higher tiers offering more usage across models from OpenAI, Anthropic and Google. On the business side, Teams costs $40 per user per month, while Enterprise is custom-priced and adds pooled usage, centralized billing, usage analytics, privacy controls, SSO, audit logs and granular admin controls. In other words, Cursor is not just charging for access to a coding model. It is charging for a managed application layer that sits on top of multiple model providers while adding team features, governance and workflow tooling. That model is increasingly under pressure as first-party AI companies push deeper into coding itself. OpenAI and Anthropic are no longer just selling models through third-party products; they are also shipping their own coding interfaces, agents and evaluation frameworks -- such as Codex and Claude Code -- raising the question of how much room remains for an intermediary platform. Commenters on X, while unverified and not necessarily representative of the broader market, have increasingly described moving from Cursor to Anthropic's Claude Code, especially among power users drawn to terminal-first workflows, longer-running agent behavior and lower perceived overhead. Some of those posts describe frustration with Cursor's pricing, context loss or editor-centric experience, while praising Claude Code as a more direct and fully agentic way to work. Even treated cautiously, that kind of social chatter points to the strategic problem Cursor faces: it has to prove that its integrated platform, team controls and now its own in-house models add enough value to justify sitting between developers and the model makers' increasingly capable coding products. That makes Composer 2 strategically important for Cursor. 
By offering a much cheaper in-house model than Composer 1.5, tuning it tightly to Cursor's own tool stack and making a faster version the default, the company is trying to show that it provides more than a wrapper around outside systems. The challenge is that as first-party coding products improve, developers and enterprise buyers may increasingly ask whether they want a separate AI coding platform at all, or whether the model makers' own tools are becoming sufficient on their own.
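The savings percentages reported above follow directly from the listed per-million-token rates. As a quick, self-contained check (the model names below are just illustrative labels, not an API):

```python
# Sanity-check of the pricing figures reported in the coverage
# (dollars per million tokens, as listed by Cursor).

PRICING = {
    # model: (input $/Mtok, output $/Mtok)
    "composer-1.5": (3.50, 17.50),
    "composer-2": (0.50, 2.50),
    "composer-2-fast": (1.50, 7.50),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

def savings_vs(old: str, new: str) -> float:
    """Fractional price drop of `new` relative to `old` (input rate)."""
    return 1 - PRICING[new][0] / PRICING[old][0]

# Recovers the "about 86% cheaper" and "roughly 57% cheaper" figures:
print(f"{savings_vs('composer-1.5', 'composer-2'):.0%}")       # 86%
print(f"{savings_vs('composer-1.5', 'composer-2-fast'):.0%}")  # 57%
```

Because input and output prices dropped by the same ratio, the same percentages hold on both counts, matching the article's claim.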
[3]
Vibe coding startup Cursor launches programming-optimized Composer 2 model - SiliconANGLE
Cursor today introduced an artificial intelligence model called Composer 2 that it says can outperform Claude Opus 4.6 across many programming tasks. The model is accessible through the company's popular AI code editor. Cursor, which is incorporated as Anysphere Inc., says that the software has more than 1 million daily active users. That large install base helped the company secure a $29.3 billion valuation last November. Composer 2 supports prompts with up to 200,000 tokens. It can generate code, fix bugs in existing software and interact with a computer's command line interface. Developers can optionally extend the model's capabilities by providing it with access to a browser, an image generator and other tools. Cursor evaluated Composer 2 using an internal benchmark called CursorBench. The programming challenges that it contains are based on tasks completed by the company's engineering team. The average CursorBench challenge includes 352 lines of code spread across 8 files. Composer 2 achieved a score of more than 60%, which put it in third place behind GPT-5.4's high and medium configurations. Those are modes in which OpenAI Group PBC's flagship model uses more hardware to increase output quality. According to Cursor, Composer 2 outperformed GPT-5.4's low configuration and Claude Opus 4.6. Composer 2 also bested Anthropic PBC's model on the Terminal-Bench 2.0 benchmark. The evaluation measures AI models' ability to perform tasks in a command line interface. Cursor says that Composer 2 is more cost-efficient than many competing frontier models. The standard edition of the model is priced at $0.50 per million input tokens and $2.50 per million output tokens. There's also a second, more expensive version that offers the same output quality but responds to developers' prompts considerably faster. It's available for $1.50 per million input tokens and $7.50 per million output tokens. 
According to Bloomberg, the model's cost-efficiency partly stems from the fact that it was trained solely on coding datasets. Frontier models are usually trained to automate a wider range of tasks, which increases their hardware footprint. Cursor used a machine learning method called self-summarization to streamline the development process. The coding tasks that an AI model receives during training require it to process a significant amount of data. In some cases, the data volume exceeds the model's context window. Self-summarization compresses information into a form that doesn't exceed context window limits.
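The self-summarization technique described above can be illustrated with a toy sketch. Everything below (the token counter, the summarize step, the 80% trigger) is an assumed stand-in for illustration; only the 200,000-token window figure comes from the coverage.

```python
# Toy sketch of self-summarization for a long-running agent: when the
# accumulated transcript approaches the context window, older turns are
# collapsed into a compact summary so the task can continue.

CONTEXT_WINDOW = 200_000   # tokens; the window size reported for Composer 2
SUMMARY_TRIGGER = 0.8      # compress when 80% full (assumed threshold)

def count_tokens(text: str) -> int:
    # Crude placeholder: one token per whitespace-separated word.
    return len(text.split())

def summarize(turns: list[str]) -> str:
    # Crude placeholder: keep the first 20 words of each older turn.
    # A real system would ask the model itself to write the summary.
    return "SUMMARY: " + " | ".join(" ".join(t.split()[:20]) for t in turns)

def append_turn(history: list[str], turn: str) -> list[str]:
    """Add a turn; compress older turns if the transcript grows too large."""
    history = history + [turn]
    if sum(count_tokens(t) for t in history) > CONTEXT_WINDOW * SUMMARY_TRIGGER:
        history = [summarize(history[:-1]), history[-1]]
    return history
```

In practice the summary would be produced by the model itself and would preserve task-relevant state (open files, failing tests, next steps) rather than truncated text, but the control flow is the same: compress, then keep going.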
[4]
Cursor founder clears air on Kimi model use in Composer 2: Here's all you need to know
Users had speculated that Composer 2, a new model designed to improve efficiency in software development workflows, was built on an external base model that was not disclosed at launch. In an X post, cofounder Aman Sanger acknowledged that they had indeed missed mentioning the Kimi base in their blog and added that the company will correct this in future releases. Artificial intelligence (AI) coding startup Cursor is facing scrutiny over its newly launched Composer 2 model after users speculated that the system is built on an external base model that was not disclosed at launch. Cursor unveiled Composer 2, a new model designed to improve efficiency in software development workflows, on March 19. However, Chinese AI startup Moonshot AI publicly endorsed Cursor's newly launched Composer 2 on Saturday. Further, in a post on X, Cursor cofounder Aman Sanger confirmed that the company selected Kimi K2.5 after evaluating multiple base models. Moonshot AI develops and owns the Kimi family of models, including Kimi K2.5. "We've evaluated a lot of base models on perplexity-based evals, and Kimi K2.5 proved to be the strongest," Sanger said. He added that Composer 2 is built on top of the base model with further training, fine-tuning using reinforcement learning, and supporting systems that help it run efficiently. Sanger acknowledged that Cursor did not initially disclose its use of the Kimi base model in its launch blog. "It was a miss to not mention the Kimi base in our blog from the start," he said, adding that the company plans to correct this in future releases. Cursor operates in a competitive landscape alongside established players such as OpenAI and Anthropic, as well as a growing number of specialised startups building coding-focussed AI tools. Sanger, who also leads the company's research efforts, had said earlier that Composer 2 is trained specifically on coding-related data. 
The approach focusses on building a smaller, more specialised model optimised for software engineering tasks.

Composer 2 features and pricing

Composer 2 is priced at $0.50 per million input tokens and $2.50 per million output tokens, positioning it competitively among coding-focussed AI models. The model is designed for long-horizon coding tasks, enabling it to handle multi-step software problems such as debugging, testing and implementation across larger codebases, the company said in a blog post. Cursor said Composer 2 shows measurable gains over earlier versions, reporting 61.3 on CursorBench, 61.7 on Terminal-Bench 2.0, and 73.7 on SWE-bench Multilingual. These benchmarks evaluate performance across areas such as coding accuracy, instruction following, and the capability of the AI model to perform real-world software engineering tasks.

How it stacks up against OpenAI, Anthropic

On Terminal-Bench 2.0, Composer 2 outperformed several competing models. Cursor said OpenAI's GPT-5.4 scored 75.1, while Anthropic's Opus 4.6 recorded 58.0, placing the Anthropic model below Composer 2's 61.7 on that benchmark. However, comparisons across models may vary depending on evaluation setup, datasets, and tokenisation methods. Cursor also noted that tokens used by Anthropic's models are approximately 15% smaller than those used by Composer and GPT models, which can affect cost and performance comparisons. Composer 2 is built as a mixture-of-experts model trained with reinforcement learning in real development environments. This training approach differs from many competing systems, which are typically trained on broader datasets and later adapted for coding tasks.
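The tokenisation caveat above matters when comparing per-token prices: if one vendor's tokens cover about 15% less text, the same document consumes proportionally more of them. A rough normalisation under that assumption (the 0.85 factor and the example price are illustrative, not vendor-confirmed figures):

```python
# Normalise a per-million-token price to a common "token size" so rates
# from different tokenisers are comparable per unit of text. The 0.85
# factor encodes the article's ~15%-smaller-token claim as an assumption.

def normalized_price(per_mtok_price: float, token_size: float = 1.0) -> float:
    """Effective price per million reference-sized tokens of text.

    token_size is the vendor's token size relative to the reference
    tokeniser; smaller tokens mean more tokens per document, so the
    effective price per unit of text rises.
    """
    return per_mtok_price / token_size

# A hypothetical $5.00/Mtok rate on a tokeniser with 15%-smaller tokens
# works out to roughly $5.88 per million reference-sized tokens:
print(round(normalized_price(5.00, 0.85), 2))  # 5.88
```

The adjustment cuts both ways: a vendor with smaller tokens looks slightly cheaper per token than it is per document, which is presumably why Cursor flagged the difference.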
[5]
Cursor Launches Composer 2 To Rival OpenAI, Anthropic
Cursor is launching a new AI model for software development that will reportedly rival OpenAI and Anthropic PBC. Composer 2, a third-gen coding model, is anticipated to exceed Anthropic's Opus 4.6 on several major coding benchmarks, Bloomberg reports. The coding model is designed as an AI agent that can autonomously handle complex, time-consuming coding tasks for users. Sualeh Asif, Aman Sanger, Arvid Lunnemark, and Michael Truell founded Anysphere in 2022. Cursor introduced its AI assistant in 2023, designed to streamline coding and debugging for developers, making their workflow faster and more precise. Anysphere saw extremely fast growth, reaching $100 million in annual recurring revenue (ARR) in January 2025. The company hit a major funding milestone in June 2025, raising $900 million. This round, led by Thrive Capital and with participation from Andreessen Horowitz, Accel, and DST Global, valued the company at approximately $9.9 billion, Bloomberg reported. Cursor is also seeking to raise a new funding round at a roughly $50 billion valuation, sources told Bloomberg earlier this month. Sources said that plans are still in the early stages and could change. Cursor now serves over 1 million daily users, including 50,000 businesses, such as the payment processing company Stripe Inc. and the creative software platform Figma Inc.
[6]
AI coding startup Cursor plans new model to rival Anthropic, OpenAI - The Economic Times
Cursor, a leading artificial intelligence startup for coding, is set to release a more efficient AI model for software development in a bid to keep pace with larger firms like Anthropic PBC and OpenAI. The company on Thursday plans to unveil Composer 2, which is meant to work as an AI agent that carries out lengthy coding tasks on a user's behalf. Anthropic and OpenAI have also introduced more powerful AI models that they say can take on increasingly complicated and time-consuming work writing software. San Francisco-based Cursor launched its first AI coding assistant in 2023 and quickly caught on with professional software developers, leading to a new style of programming known as vibe coding. The company now has more than 1 million daily users, including 50,000 businesses such as payment processing firm Stripe and creative software maker Figma. Cursor has also been in talks to raise a new round of financing at a roughly $50 billion valuation, Bloomberg News reported this month. However, the company faces heated competition from OpenAI, Anthropic and a number of newer startups that offer AI coding assistants designed to field more complex tasks on behalf of the user. Cursor supports a wide range of models, including those from OpenAI and Anthropic, and counts the ChatGPT maker as an investor. Cursor cofounder Aman Sanger, who leads its research team, said the startup focused on training Composer 2 solely on coding-related data -- an effort that let it build a smaller model that's meant to be less expensive to use. Unlike other leading AI developers whose tools are used for a wide range of tasks, Cursor's model is designed purely for coding. 
"It won't help you do your taxes," Sanger said. "It won't be able to write poems."
Share
Share
Copy Link
AI coding startup Cursor has unveiled Composer 2, a specialized model trained exclusively on coding data that outperforms Claude Opus 4.6 on key benchmarks. Priced at $0.50 per million input tokens, it's 86% cheaper than its predecessor and designed for long-horizon agentic coding tasks. The company later clarified it's built on Moonshot AI's Kimi K2.5 base model after initial omission sparked scrutiny.
Cursor, the San Francisco-based AI coding startup valued at $29.3 billion, has launched Composer 2, a new AI model for software development designed to compete directly with industry giants OpenAI and Anthropic [1]. The model represents a strategic move by the company to maintain its competitive edge in an increasingly crowded market where AI coding assistants are becoming essential tools for developers [2].

The programming-optimized model is now available inside Cursor's agentic AI coding environment and serves the company's more than 1 million daily users, including 50,000 businesses such as Stripe Inc. and Figma Inc. [1].

Aman Sanger, Cursor's co-founder who leads the research team, emphasized that Composer 2 was trained solely on coding-related data, creating a smaller, more specialized model focused exclusively on software development workflows [1].

Composer 2 delivers significant economic advantages over its predecessor, priced at $0.50 per million input tokens and $2.50 per million output tokens, an 86% cost reduction compared to Composer 1.5, which cost $3.50 per million input tokens and $17.50 per million output tokens [2]. The company also launched Composer 2 Fast, a higher-priced but faster variant at $1.50 per million input tokens and $7.50 per million output tokens, which is roughly 57% cheaper than Composer 1.5 [2].
The model supports prompts with up to 200,000 tokens and can generate code, fix bugs, and interact with command line interfaces [3]. Performance improvements are substantial: Composer 2 achieved 61.3 on CursorBench, 61.7 on Terminal-Bench 2.0, and 73.7 on SWE-bench Multilingual, compared to Composer 1.5's scores of 44.2, 47.9, and 65.9 respectively [2].

On Terminal-Bench 2.0, which measures how well an AI agent performs tasks in command line terminal-style interfaces, Composer 2 outperformed Anthropic's Claude Opus 4.6, which scored 58.0 [2]. However, OpenAI's GPT-5.4 still leads at 75.1, positioning Composer 2 as a strong mid-tier competitor rather than the absolute benchmark leader [3].
The model's strength lies in its optimization for long-horizon coding tasks: problems requiring hundreds of actions across multiple files, command executions, and iterative debugging [2]. Cursor's documentation describes the model as tuned specifically for tool use, file edits, and terminal operations within its platform, addressing one of the biggest challenges in coding AI: maintaining reliability across extended workflows rather than just isolated code generation [2].

Days after launch, Cursor faced scrutiny when users discovered the company had not initially disclosed that Composer 2 was built on Moonshot AI's Kimi K2.5 base model [4]. Aman Sanger acknowledged the oversight in a post on X, stating, "We've evaluated a lot of base models on perplexity-based evals, and Kimi K2.5 proved to be the strongest" [4].
Sanger explained that Composer 2 is built on top of the base model with further training, fine-tuning using reinforcement learning, and supporting systems for efficiency [4]. He admitted, "It was a miss to not mention the Kimi base in our blog from the start," and pledged to correct this in future releases [4]. The disclosure gap raises questions about transparency standards in AI development, particularly as companies compete to differentiate their offerings.
Cursor employed self-summarization, a machine learning method that compresses information to fit within context window limits during training data processing [3]. The company states that quality gains stem from its first continued pretraining run, which provided a stronger foundation for scaled reinforcement learning focused on agentic coding tasks [2].

Composer 2 can access Cursor's agent tool stack, including semantic code search, file and folder search, file reads and edits, shell commands, browser control, and web access [2]. This tight integration positions the model as a Cursor-native release rather than a broadly distributed standalone offering, which may limit its addressable audience but enhances its effectiveness within the platform's ecosystem [2].

Cursor has been in talks to raise a new funding round at an approximately $50 billion valuation, though plans remain in early stages [1][5]. The company previously raised $900 million in June 2025 at a $9.9 billion valuation, led by Thrive Capital with participation from Andreessen Horowitz, Accel, and DST Global [5]. The startup reached $100 million in annual recurring revenue in January 2025, demonstrating rapid commercial traction [5].

For developers evaluating whether to adopt Composer 2, the decision hinges less on raw benchmark superiority and more on whether they value a model optimized specifically for Cursor's product experience. The cost-to-performance tradeoff appears favorable compared to frontier models from OpenAI and Anthropic, particularly for teams already embedded in Cursor's development environment. As AI coding tools mature, the competitive landscape will likely reward specialized models that balance performance, cost, and workflow integration rather than pursuing universal benchmark dominance.
Summarized by Navi