4 Sources
[1]
Cursor 2.0 adds coding model, UI for parallel agents
Cursor's new Composer model, built for low-latency agentic coding, completes most iterations in under 30 seconds, according to Anysphere.

Anysphere has introduced Cursor 2.0, an update to the AI coding assistant that features the tool's first coding model, called Composer, and an interface for working with many agents in parallel. Both Cursor 2.0 and Composer were introduced October 29 by the Cursor team at Anysphere. Cursor is a fork of Microsoft's popular Visual Studio Code editor, downloadable at cursor.com for Windows, MacOS, and Linux.

Composer is a frontier model that is four times faster than similarly intelligent agent models, the Cursor team said. Built for low-latency agentic coding in Cursor, Composer completes most turns in fewer than 30 seconds, according to the team's own benchmarks. A mixture-of-experts language model that supports long-context generation and understanding, Composer is specialized for software engineering through reinforcement learning in a diverse range of development environments, the Cursor team said. The model was trained with a set of tools including codebase-wide semantic search, which makes it better at understanding and working in large code bases, they added.
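The codebase-wide semantic search mentioned here is, in general terms, an embedding-based retrieval step: code is chunked, embedded, and ranked against the agent's query. The sketch below is a generic illustration of that pattern, not Cursor's implementation; the `embed` placeholder, file-level chunking, and file extensions are assumptions made for the example.

```python
# Generic sketch of an embedding-based "semantic search over a codebase" tool.
# Not Cursor's implementation: embed() is a stand-in for any text-embedding model.
from pathlib import Path
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a unit-length embedding vector for `text`."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

def index_codebase(root: str, exts=(".py", ".ts")) -> list[tuple[str, np.ndarray]]:
    """Embed each source file (real systems chunk files and use a code-aware model)."""
    index = []
    for path in Path(root).rglob("*"):
        if path.suffix in exts:
            index.append((str(path), embed(path.read_text(errors="ignore"))))
    return index

def semantic_search(index, query: str, k: int = 5) -> list[str]:
    """Rank indexed files by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(index, key=lambda item: -float(item[1] @ q))
    return [path for path, _ in scored[:k]]
```

In a real agent, the top-ranked chunks would be placed into the model's context so that its edits reference the project's actual definitions rather than guessed ones.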
[2]
Vibe coding platform Cursor releases Composer, its first LLM
The vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM), as part of its Cursor 2.0 platform update. Composer is designed to execute coding tasks quickly and accurately in production-scale environments, representing a new step in AI-assisted programming. It's already being used by Cursor's own engineering staff in day-to-day development -- indicating maturity and stability.

According to Cursor, Composer completes most interactions in less than 30 seconds while maintaining a high level of reasoning ability across large and complex codebases. The model is described as four times faster than similarly intelligent systems and is trained for "agentic" workflows -- where autonomous coding agents plan, write, test, and review code collaboratively.

Previously, Cursor supported "vibe coding" -- using AI to write or complete code based on natural language instructions from a user, even someone untrained in development -- atop other leading proprietary LLMs from the likes of OpenAI, Anthropic, Google, and xAI. These options are still available to users.

Benchmark Results

Composer's capabilities are benchmarked using "Cursor Bench," an internal evaluation suite derived from real developer agent requests. The benchmark measures not just correctness, but also the model's adherence to existing abstractions, style conventions, and engineering practices.

On this benchmark, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second -- about twice as fast as leading fast-inference models and four times faster than comparable frontier systems. Cursor's published comparison groups models into several categories: "Best Open" (e.g., Qwen Coder, GLM 4.6), "Fast Frontier" (Haiku 4.5, Gemini Flash 2.5), "Frontier 7/2025" (the strongest model available midyear), and "Best Frontier" (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes.

A Model Built with Reinforcement Learning and Mixture-of-Experts Architecture

Research scientist Sasha Rush of Cursor provided insight into the model's development in posts on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model: "We used RL to train a big MoE model to be really good at real-world coding, and also very fast."

Rush explained that the team co-designed both Composer and the Cursor environment to allow the model to operate efficiently at production scale: "Unlike other ML systems, you can't abstract much from the full-scale system. We co-designed this project and Cursor together in order to allow running the agent at the necessary scale."

Composer was trained on real software engineering tasks rather than static datasets. During training, the model operated inside full codebases using a suite of production tools -- including file editing, semantic search, and terminal commands -- to solve complex engineering problems. Each training iteration involved solving a concrete challenge, such as producing a code edit, drafting a plan, or generating a targeted explanation.

The reinforcement loop optimized both correctness and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or speculative responses.
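Cursor has not published Composer's architecture beyond describing it as a large MoE model trained with RL, but the general mixture-of-experts pattern Rush refers to is well established: a learned router activates only a few expert feed-forward networks per token, so a very large model can still generate quickly. The PyTorch sketch below is a minimal, generic illustration of top-k expert routing with made-up dimensions; it says nothing about Composer's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer: a router picks `k` experts per
    token and the layer returns a gate-weighted sum of their outputs."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (batch, seq, d_model)
        logits = self.router(x)                # (batch, seq, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Dense loop for clarity; production MoE kernels dispatch tokens sparsely
        # so only the selected experts are actually computed.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., slot] == e).unsqueeze(-1)   # tokens routed to expert e
                out = out + mask * weights[..., slot:slot + 1] * expert(x)
        return out

x = torch.randn(2, 16, 512)
print(TopKMoE()(x).shape)   # torch.Size([2, 16, 512])
```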
Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously. This design enables Composer to work within the same runtime context as the end-user, making it more aligned with real-world coding conditions -- handling version control, dependency management, and iterative testing.

From Prototype to Production

Composer's development followed an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks. "Cheetah was the v0 of this model primarily to test speed," Rush said on X. "Our metrics say it [Composer] is the same speed, but much, much smarter."

Cheetah's success at reducing latency helped Cursor identify speed as a key factor in developer trust and usability. Composer maintains that responsiveness while significantly improving reasoning and task generalization. Developers who used Cheetah during early testing noted that its speed changed how they worked. One user commented that it was "so fast that I can stay in the loop when working with it." Composer retains that speed but extends capability to multi-step coding, refactoring, and testing tasks.

Integration with Cursor 2.0

Composer is fully integrated into Cursor 2.0, a major update to the company's agentic development environment. The platform introduces a multi-agent interface, allowing up to eight agents to run in parallel, each in an isolated workspace using git worktrees or remote machines. Within this system, Composer can serve as one or more of those agents, performing tasks independently or collaboratively. Developers can compare multiple results from concurrent agent runs and select the best output.

Cursor 2.0 also includes supporting features that enhance Composer's effectiveness:

* In-Editor Browser (GA) - enables agents to run and test their code directly inside the IDE, forwarding DOM information to the model.
* Improved Code Review - aggregates diffs across multiple files for faster inspection of model-generated changes.
* Sandboxed Terminals (GA) - isolate agent-run shell commands for secure local execution.
* Voice Mode - adds speech-to-text controls for initiating or managing agent sessions.

While these platform updates expand the overall Cursor experience, Composer is positioned as the technical core enabling fast, reliable agentic coding.

Infrastructure and Training Systems

To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs. The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead. This configuration allows Cursor to train models natively at low precision without requiring post-training quantization, improving both inference speed and efficiency.

Composer's training relied on hundreds of thousands of concurrent sandboxed environments -- each a self-contained coding workspace -- running in the cloud. The company adapted its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.

Enterprise Use

Composer's performance improvements are supported by infrastructure-level changes across Cursor's code intelligence stack. The company has optimized its Language Server Protocols (LSPs) for faster diagnostics and navigation, especially in Python and TypeScript projects.
These changes reduce latency when Composer interacts with large repositories or generates multi-file updates. Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor's Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across organizations.

Pricing for individual users ranges from Free (Hobby) to Ultra ($200/month) tiers, with expanded usage limits for Pro+ and Ultra subscribers. Business pricing starts at $40 per user per month for Teams, with enterprise contracts offering custom usage and compliance options.

Composer's Role in the Evolving AI Coding Landscape

Composer's focus on speed, reinforcement learning, and integration with live coding workflows differentiates it from other AI development assistants such as GitHub Copilot or Replit's Agent. Rather than serving as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project's codebase.

This model-level specialization -- training AI to function within the real environment it will operate in -- represents a significant step toward practical, autonomous software development. Composer is not trained only on text data or static code, but within a dynamic IDE that mirrors production conditions. Rush described this approach as essential to achieving real-world reliability: the model learns not just how to generate code, but how to integrate, test, and improve it in context.

What It Means for Enterprise Devs and Vibe Coding

With Composer, Cursor is introducing more than a fast model -- it's deploying an AI system optimized for real-world use, built to operate inside the same tools developers already rely on. The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.

While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes those workflows viable. It's the first coding model built specifically for agentic, production-level coding -- and an early glimpse of what everyday programming could look like when human developers and autonomous models share the same workspace.
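The Infrastructure and Training Systems section above describes asynchronous RL built on PyTorch and Ray, with large numbers of sandboxed coding environments producing rollouts concurrently. The toy sketch below illustrates that general pattern only; the SandboxEnv actor, its random reward, and the actor count are invented for illustration and are not Cursor's code.

```python
# Toy sketch of asynchronous RL rollout collection with Ray.
# Hypothetical: SandboxEnv, the reward, and the scale are illustrative stand-ins.
import random
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
class SandboxEnv:
    """Stand-in for an isolated coding workspace that runs one agent episode."""
    def rollout(self, policy_version: int) -> dict:
        reward = random.random()          # real systems would score tests, lints, correctness
        return {"policy_version": policy_version, "reward": reward}

envs = [SandboxEnv.remote() for _ in range(8)]        # far larger in production-scale runs
pending = [env.rollout.remote(policy_version=0) for env in envs]

batch = []
while pending:
    done, pending = ray.wait(pending, num_returns=1)  # consume rollouts as they finish
    batch.append(ray.get(done[0]))

mean_reward = sum(r["reward"] for r in batch) / len(batch)
print(f"collected {len(batch)} rollouts, mean reward {mean_reward:.2f}")
ray.shutdown()
```

A learner process would consume such batches asynchronously and push updated policy weights back to the environments, which is what lets rollout collection and training overlap instead of alternating.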
[3]
Cursor 2.0 Lets Developers Run 8 AI Agents in Parallel, Adds Its Own Coding Model
Cursor has launched version 2.0, with a new multi-agent interface and its first proprietary AI model, Composer, designed for low-latency coding tasks.

Cursor 2.0 reorganises the IDE around agents rather than files, allowing users to run up to eight coding agents simultaneously. Each operates in an isolated environment using git worktrees or remote machines to prevent file conflicts. It also enhances agent collaboration and review processes, making it easier to monitor edits across files and test code within the interface. "With Cursor 2.0, we're making it simple to run many agents in parallel without them interfering with one another," said Cursor.

The update also makes sandboxed terminals and a native browser-based testing tool generally available. Enterprises now have new administrative controls for sandboxing, which improve cloud agent reliability and user activity auditing. Cursor 2.0 boosts the performance of language server protocols (LSPs) in Python and TypeScript by dynamically increasing memory limits for larger projects. In addition, the interface now supports voice control, shareable team commands, and improved prompt management, reflecting Cursor's move towards team-wide automation rather than individual code editing.

Alongside this, Cursor also released Composer, a mixture-of-experts (MoE) coding model trained via reinforcement learning (RL). The company describes it as "a frontier model that is 4x faster than similarly intelligent models."

"The model is built for low-latency agentic coding in Cursor, completing most turns in under 30 seconds. Early testers found the ability to iterate quickly with the model delightful and trust the model for multi-step coding tasks," said Cursor.

Composer was trained in real-world environments, with access to tools such as semantic search, terminal commands, and file editing to support agentic workflows. The company built an internal benchmark, Cursor Bench, to measure a model's usefulness to developers, evaluating code quality, correctness, and adherence to existing abstractions.

During RL, Composer learned to optimise for speed by minimising redundant responses and parallelising tool use. It was trained using custom infrastructure built on PyTorch and Ray, scaled across thousands of NVIDIA GPUs with MXFP8 kernels for low-precision efficiency. "We trained Composer to make efficient choices in tool use and maximise parallelism whenever possible," said Cursor.

The company positions Composer as optimised for agentic coding, combining long-context understanding with the responsiveness needed for interactive development. Though models like GPT-5 and Sonnet 4.5 outperform it on some benchmarks, Cursor claims Composer offers the fastest interactive experience among current "fast frontier" coding models.

"Cursor builds tools for software engineering, and we make heavy use of the tools we develop. A motivation of Composer development has been developing an agent we would reach for in our own work," said the company. "In recent weeks, we have found that many of our colleagues were using Composer for their day-to-day software development. With this release, we hope that you also find it to be a valuable tool."
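The isolation mechanism described here, one git worktree per agent so parallel edits never collide in the same working copy, is standard git functionality rather than anything Cursor-specific. The sketch below shows how such a setup could be scripted from Python; the repository path, branch names, and agent count are hypothetical, and this is not Cursor's implementation.

```python
# Illustrative sketch: give each of N parallel agents its own git worktree
# so their edits cannot conflict. Paths and branch names are hypothetical.
import subprocess
from pathlib import Path

REPO = Path("/path/to/repo")          # hypothetical repository checkout

def create_agent_worktrees(n_agents: int = 8) -> list[Path]:
    worktrees = []
    for i in range(n_agents):
        path = REPO.parent / f"agent-{i}"
        branch = f"agent/{i}"
        # `git worktree add -b <branch> <path>` gives the agent an isolated working copy
        # of the same repository, on its own branch.
        subprocess.run(
            ["git", "-C", str(REPO), "worktree", "add", "-b", branch, str(path)],
            check=True,
        )
        worktrees.append(path)
    return worktrees

def remove_agent_worktrees(worktrees: list[Path]) -> None:
    for path in worktrees:
        subprocess.run(["git", "-C", str(REPO), "worktree", "remove", str(path)], check=True)
```

Because every worktree shares the same underlying object store, creating and discarding agent workspaces this way is cheap compared with full clones.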
[4]
Meet Composer: Cursor's 4x-faster model that writes, tests, and thinks code
Cursor 2.0 integrates writing, testing, and debugging into one agentic loop.

For years, the promise of AI-assisted coding has been tempered by a simple reality: the latency of the model-human interaction often slows down the overall development loop. A few seconds of waiting here, a few seconds there - it all adds up, breaking the flow state essential for deep work. That's why the release of Cursor 2.0 and its cornerstone model, Composer, marks a significant inflection point, accelerating the agentic coding workflow to an unprecedented degree.

The headline feature of Composer is its speed. This is a frontier coding model that is an average of 4x faster than other similarly intelligent models on the market. In a domain where waiting time dictates adoption, that is a substantial leap. Composer is engineered specifically for low-latency agentic coding in Cursor, designed to complete most turns and multi-step tasks in under 30 seconds. Early testers have reported that this rapid feedback loop feels "delightful," restoring the sense of immediacy that developers demand. The model's speed transforms what was once a disruptive pause into a collaborative conversation, allowing developers to iterate with the AI quickly and confidently.

Composer doesn't just write code; it "thinks" about the entire codebase, making it trustworthy for complex, multi-step tasks. Its intelligence is rooted in how it was trained, equipped with a suite of powerful developer tools, including codebase-wide semantic search. This capability allows the model to understand and navigate large code repositories far more effectively than models limited to token windows alone. When a developer asks Composer to implement a feature, refactor a module, or trace a bug, the model doesn't just generate a naive suggestion. Instead, it leverages semantic search to locate relevant definitions, historical context, and analogous implementations across the entire project. This enables it to produce highly contextual, accurate, and idiomatic code suggestions that integrate with the existing architecture. For developers working on massive, entrenched projects, this ability to grasp the "big picture" of the code is perhaps more valuable than pure generation speed alone.

Composer's capabilities are maximized by the new Cursor 2.0 interface, which fundamentally shifts the environment from a file-centric IDE to an agent-centric workspace. This design philosophy recognizes that the agent is not merely a suggestion engine but a full-fledged co-developer.

The most critical advancement in Cursor 2.0 is the direct integration of the "tests code" part of the development loop. As developers increase their reliance on agents, two bottlenecks emerge: reviewing the agent's changes and verifying their functionality. Cursor 2.0 addresses the latter with a native browser tool. This means that after Composer writes a solution, the agent can immediately spin up the environment, run the code, simulate user interactions, and test its own work. If the tests fail or the result is incorrect, the agent doesn't stop; it receives the failure report and iterates on the code until the correct final result is produced. Furthermore, the new interface is built to support running many agents in parallel, powered by tools like git worktrees.
This allows developers to tackle complex problems by having multiple Composer models attempt the task simultaneously, providing a safety net and often improving the final output quality by combining or selecting the best result.

Composer, paired with Cursor 2.0, represents a matured vision for agentic development. By providing a model that is 4x faster, capable of semantic reasoning, and integrated with native tools for self-correction and testing, Cursor has moved AI assistance past simple code completion and into a full-cycle development partnership. For the developer, this means less time waiting, less time reviewing, and more time focusing on the architectural challenges that truly matter.
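The write-run-iterate loop the article describes reduces to a simple control flow: apply a generated edit, run the tests, feed any failure output back to the model, and stop when the tests pass or an attempt budget runs out. The sketch below illustrates that loop under stated assumptions; `generate_edit` and `apply_edit` are placeholders, and `pytest` stands in for whatever test command a project actually uses. It is not Cursor's API.

```python
# Hypothetical sketch of a self-correcting agent loop: write code, run the tests,
# and iterate on the failure output until the suite passes or attempts run out.
import subprocess

def generate_edit(task: str, feedback: str | None) -> str:
    """Placeholder for a model call that returns a patch for `task`,
    optionally conditioned on the previous test failure output."""
    raise NotImplementedError

def apply_edit(patch: str) -> None:
    """Placeholder: apply the patch to the agent's working copy."""
    raise NotImplementedError

def agent_loop(task: str, max_attempts: int = 5) -> bool:
    feedback = None
    for attempt in range(max_attempts):
        apply_edit(generate_edit(task, feedback))
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True                              # tests pass: accept the change
        feedback = result.stdout + result.stderr     # iterate on the failure report
    return False                                     # budget exhausted: surface to the user
```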
Anysphere releases Cursor 2.0 featuring Composer, its first in-house coding model, which completes most tasks in under 30 seconds. The update introduces multi-agent parallel workflows and native testing capabilities for enhanced developer productivity.
Anysphere has launched Cursor 2.0, marking a significant milestone in AI-assisted development with the introduction of Composer, its first proprietary coding model. The update, announced on October 29, promises to transform developer workflows through unprecedented speed and multi-agent capabilities [1].
According to Cursor, Composer delivers results four times faster than similarly intelligent agent models while completing most interactions in under 30 seconds [2]. This speed improvement addresses a critical bottleneck in AI-assisted development, where latency often disrupts developer flow states and reduces productivity [4].

Composer utilizes a mixture-of-experts (MoE) architecture trained through reinforcement learning specifically for software engineering tasks. Unlike traditional models trained on static datasets, Composer was developed in real-world coding environments with access to production tools including file editing, semantic search, and terminal commands [2].

The model's training process involved solving concrete engineering challenges within full codebases, optimizing for both correctness and efficiency. Through reinforcement learning, Composer developed autonomous behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches [2]. Research scientist Sasha Rush explained that the team co-designed both Composer and the Cursor environment to allow the model to operate efficiently at production scale [2].
Cursor 2.0 fundamentally reimagines the development environment by shifting from a file-centric to an agent-centric workspace. The platform now supports up to eight coding agents running simultaneously, each operating in isolated environments using git worktrees or remote machines to prevent conflicts [3].

This multi-agent approach allows developers to tackle complex problems by having multiple Composer instances attempt tasks simultaneously, providing redundancy and often improving final output quality through result combination or selection [4]. The system includes enhanced collaboration and review processes, making it easier to monitor edits across files and coordinate agent activities [3].
A critical advancement in Cursor 2.0 is the integration of native testing capabilities directly into the development loop. The platform includes a browser-based testing tool that allows agents to automatically test their own code, simulate user interactions, and iterate on failures until achieving correct results [4].

This self-correcting capability addresses a major bottleneck in AI-assisted development: verifying and debugging generated code. When Composer writes a solution, it can immediately spin up the environment, run tests, and refine the implementation based on feedback, creating a truly autonomous development cycle [4].
Cursor developed an internal evaluation suite called "Cursor Bench" to measure Composer's performance against real developer agent requests. The benchmark evaluates not just correctness but also adherence to existing abstractions, style conventions, and engineering practices [2].

According to Cursor's benchmarks, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second, approximately twice as fast as leading fast-inference models and four times faster than comparable frontier systems [2]. The model matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among tested categories [2].
Summarized by Navi