Sources
[1]
CodeRabbit raises $60M, valuing the 2-year-old AI code review startup at $550M | TechCrunch
Harjot Gill was running FlexNinja, an observability startup he co-founded several years after selling his first startup, Netsil, to Nutanix in 2018, when he noticed a curious trend. "We had a team of remote engineers who were starting to adopt AI code generation on GitHub Copilot," Gill told TechCrunch. "We saw that adoption happen, and it was very clear to me that as a second-order effect, it's going to cause bottlenecks in the code review."

In early 2023, Gill started CodeRabbit, an AI-powered code review platform, which went on to acquire FlexNinja. Gill's prediction has come true: developers now regularly use AI coding assistants to generate code, but the output is often buggy, forcing engineers to spend considerable time on corrections. CodeRabbit can help catch some of those errors. The business has been growing 20% a month and is now making more than $15 million in annual recurring revenue (ARR), according to Gill.

Investors find the startup's growth exciting. On Tuesday, CodeRabbit announced that it raised a $60 million Series B, valuing the company at $550 million. The round, which brought the startup's total funding to $88 million, was led by Scale Venture Partners with participation from NVentures, Nvidia's venture capital arm, and returning investors including CRV.

CodeRabbit is helping companies like Chegg, Groupon, and Mercury, along with over 8,000 individual developers, save time on the famously frustrating task of code review, which has become even more time-consuming with the rise of AI-generated code. Since CodeRabbit understands a company's codebase, it can identify bugs and provide feedback, acting like a coworker, Gill said. He added that companies using CodeRabbit can cut the number of humans working on code review by half.

As with most areas of AI, CodeRabbit has competition. Startup rivals include Graphite, which secured a $52 million Series B led by Accel earlier this year, and Greptile, which we reported is in talks for a $30 million Series A round with Benchmark. While leading AI coding assistants like Anthropic's Claude Code and Cursor also offer AI-powered code review capabilities, Gill is betting that customers will prefer a standalone offering in the long term. "CodeRabbit is a lot more comprehensive in terms of depth and technical breadth than bundled solutions," he said.

Whether his prediction turns out to be correct remains to be seen. But for now, thousands of developers are clearly happy to pay CodeRabbit $30 a month. Even with the growing popularity of AI code review tools like CodeRabbit, AI solutions still can't be fully trusted to fix the bugs and "unusable" code written by AI. The unreliability of AI-generated code has given rise to a new corporate role: the vibe code cleanup specialist.
[2]
With Vibe Coding AI tools generating more code than ever before, enterprises need quality assurance tools to make sure it all works - here's how to evaluate and choose the right one
Enterprise startup CodeRabbit today raised $60 million to solve a problem most enterprises don't realize they have yet. As AI coding agents generate code faster than humans can review it, organizations face a critical infrastructure decision that will determine whether they capture AI's productivity gains or get buried in technical debt.

The funding round, led by Scale Venture Partners, signals investor confidence in a new category of enterprise tooling. The code quality assurance (QA) space is a busy one, with GitHub's bundled code review features, Cursor's bug bot, Zencoder, Qodo and emerging players like Graphite all competing in a market that's rapidly attracting attention from both startups and incumbent platforms.

The market timing reflects a measurable shift in development workflows. Organizations using AI coding tools generate significantly more code volume, and traditional peer review processes haven't scaled to match this velocity. The result is a new bottleneck that threatens to negate AI's promised productivity benefits.

"AI-generated code is here to stay, but speed without a centralized knowledge base and an independent governance layer is a recipe for disaster," Harjot Gill, CEO of CodeRabbit, told VentureBeat. "Code review is the most critical quality gate in the agentic software lifecycle."

The technical architecture that matters

Unlike traditional static analysis tools that rely on rule-based pattern matching, AI code review platforms use reasoning models to understand code intent across entire repositories. The technical complexity is significant: these systems require multiple specialized models working in sequence over 5-15 minute analysis workflows.

"We're using around six or seven different models under the hood," Gill explained. "This is one of those areas where reasoning models like GPT-5 are a good fit. These are PhD-style problems."

The key differentiator lies in context engineering. Advanced platforms gather intelligence from dozens of sources: code graphs, historical pull requests, architectural documents and organizational coding guidelines. This approach enables AI reviewers to catch issues that traditional tools miss, such as security vulnerabilities that emerge from changes across multiple files, or architectural inconsistencies that only become apparent with full repository context. (A minimal sketch of this gather-context-then-review pattern appears below.)

Competitive landscape and vendor positioning

The AI code review space is attracting competition from multiple directions. Though there are integrated QA capabilities built directly into platforms like GitHub and Cursor, there is still a need, and a market, for standalone solutions.

"When it comes to the critical trust layer, organizations won't cheap out on that," Gill said. "They will buy the best tool possible." He noted that the dynamic resembles the observability market, where specialized tools like Datadog compete successfully against bundled alternatives like Amazon CloudWatch.

Gill's view is echoed by multiple industry analysts. "In an era of AI-assisted development, code review is more important than ever; AI increases code volume and complexity that correspondingly increases code review times and raises the risk of defects," IDC analyst Arnal Dayaratna told VentureBeat. "That reality elevates the value of an independent, platform-agnostic reviewer that stands apart from the IDE or model vendor."
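The article describes this pipeline only at a high level, so the following is a minimal sketch of the pattern it names, not CodeRabbit's actual architecture: repository-wide context is gathered once, then several specialized review stages run in sequence over that shared context. Every function and heuristic below is a hypothetical stand-in for the real models.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-stage AI code review pipeline.
# The stages below are trivial stand-ins for specialized models;
# they only illustrate the "context engineering" idea described above.

@dataclass
class ReviewContext:
    diff: str
    code_graph: dict[str, list[str]] = field(default_factory=dict)  # symbol -> dependents
    guidelines: list[str] = field(default_factory=list)             # org coding rules

def security_stage(ctx: ReviewContext) -> list[str]:
    # Stand-in for a security-focused model: flag obviously risky calls.
    return [f"security: '{tok}' found in diff"
            for tok in ("eval(", "exec(") if tok in ctx.diff]

def cross_file_stage(ctx: ReviewContext) -> list[str]:
    # Stand-in for repo-aware reasoning: warn when a changed symbol
    # has dependents elsewhere in the code graph.
    return [f"impact: '{sym}' is used by {deps}"
            for sym, deps in ctx.code_graph.items()
            if sym in ctx.diff and deps]

def style_stage(ctx: ReviewContext) -> list[str]:
    # Stand-in for guideline enforcement: treat each guideline
    # as a banned token for simplicity.
    return [f"style: guideline violated: {rule}"
            for rule in ctx.guidelines if rule in ctx.diff]

def review(diff: str, code_graph: dict, guidelines: list[str]) -> list[str]:
    """Gather context once, then run each specialized stage in sequence."""
    ctx = ReviewContext(diff=diff, code_graph=code_graph, guidelines=guidelines)
    findings: list[str] = []
    for stage in (security_stage, cross_file_stage, style_stage):
        findings.extend(stage(ctx))
    return findings

if __name__ == "__main__":
    diff = "def handler(data): return eval(data)  # TODO"
    print(review(diff, code_graph={"handler": ["api.py"]}, guidelines=["TODO"]))
```

In a real system each stage would be a model call with its own prompt and context window; the point of the pattern is that all stages share one expensively gathered context rather than each re-deriving it.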
Industry analyst Paul Nashawaty told VentureBeat that CodeRabbit embeds context-aware, conversational feedback directly in developer environments, making reviews faster and less noisy for developers. Its ability to learn team preferences and provide in-editor guidance reduces friction and accelerates throughput.

"That said, CodeRabbit is more of a complement than a replacement," Nashawaty said. "Most enterprises will still pair it with established Static Application Security Testing (SAST)/Source Code Analysis (SCA) tools, which the industry estimates represent a $3B-plus market growing at approximately 18% CAGR, for broader rule coverage, compliance reporting and governance."

Real-world implementation results

The Linux Foundation provides a concrete example of successful deployment. The organization supports numerous open-source projects across multiple languages and frameworks: Golang, Python, Angular and TypeScript. Before CodeRabbit, its default was manual review, an approach that was slow, inefficient and error-prone, demanded significant time from technical leads, and often took two cycles to complete. The resulting quality checks were high-variance, missing critical bugs while slowing distributed teams across time zones.

After implementing CodeRabbit, its developers reported a 25% reduction in time spent on code reviews. CodeRabbit caught issues that human reviewers had missed, including inconsistencies between documentation and test coverage, missing null checks (a hypothetical sketch of this pattern appears after the evaluation framework below) and refactoring opportunities in Terraform files.

Evaluation framework for AI code review platforms

Industry analysts have identified specific criteria enterprises should prioritize when evaluating AI code review platforms, based on common adoption barriers and technical requirements.

Agentic reasoning capabilities: IDC analyst Arnal Dayaratna recommends prioritizing agentic capabilities that use generative AI to explain why changes were made, trace impact across the repository and propose fixes with clear rationale and test implications. This differs from traditional static analysis tools that simply flag issues without contextual understanding.

Developer experience and accuracy: Analyst Paul Nashawaty emphasizes balancing developer adoption and risk coverage, with a focus on accuracy, workflow integration and contextual awareness of code changes.

Platform independence: Dayaratna highlights the value of an independent, platform-agnostic reviewer that stands apart from the IDE or model vendor.

Quality validation and governance: Both analysts stress pre-commit validation capabilities. Dayaratna recommends tools that validate suggested edits before commit to avoid new review churn, and that require automated tests, static analysis and safe application of one-click patches. Enterprises also need governance flexibility to configure review standards. "Every company has a different bar when it comes to how pedantic and how nitpicky they want the system to be," Gill noted.

Proof-of-concept approach: Nashawaty recommends a 2-4 week proof-of-concept on real issues to measure developer satisfaction, scan accuracy and remediation speed, rather than relying solely on vendor demonstrations or feature checklists.

For enterprises looking to lead in AI-assisted development, it's increasingly important to evaluate code review platforms as critical infrastructure, not point solutions.
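To make the "missing null check" category from the Linux Foundation example concrete, here is a hypothetical before-and-after in Python. It illustrates the general pattern an AI reviewer flags; it is not code from the Linux Foundation's projects.

```python
# Hypothetical illustration of a "missing null check", the kind of
# defect the article says CodeRabbit flagged. Not real project code.

def get_timeout_before(config: dict) -> int:
    # A reviewer would flag this: config.get("network") may return None,
    # so the chained .get() can raise AttributeError at runtime.
    return config.get("network").get("timeout")

def get_timeout_after(config: dict, default: int = 30) -> int:
    # Fixed: guard the intermediate value before dereferencing it,
    # and supply a default when the key is absent.
    network = config.get("network")
    if network is None:
        return default
    return network.get("timeout", default)

if __name__ == "__main__":
    print(get_timeout_after({}))                            # 30 (safe fallback)
    print(get_timeout_after({"network": {"timeout": 5}}))   # 5
```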
The organizations that establish robust AI review capabilities now will have competitive advantages in software delivery velocity and quality. For enterprises that adopt AI development tools later, the lesson is clear: plan for the review bottleneck before it constrains your AI productivity gains. The infrastructure decision you make today determines whether AI coding tools become force multipliers or sources of technical debt.
CodeRabbit, an AI-powered code review platform, secures $60 million in Series B funding, valuing the startup at $550 million. The funding targets the growing challenge of reviewing AI-generated code and aims to improve software development efficiency.
CodeRabbit, an AI-powered code review platform, has raised $60 million in a Series B funding round, lifting its valuation to $550 million [1]. The round, led by Scale Venture Partners with participation from NVentures (Nvidia's venture capital arm) and returning investors such as CRV, brings the startup's total funding to $88 million [1].

The funding comes at a moment when AI coding assistants are generating code at unprecedented rates, often producing buggy output that requires extensive human intervention [1]. That trend has created a new bottleneck in the development process, threatening to negate the productivity gains promised by AI [2].

Founded in early 2023 by Harjot Gill, CodeRabbit addresses these challenges with an AI-powered code review platform [1]. The startup has grown quickly, reporting 20% month-over-month growth and annual recurring revenue (ARR) exceeding $15 million [1].

CodeRabbit's platform uses advanced AI models to understand code intent across entire repositories. Unlike traditional static analysis tools, it runs multiple specialized models in sequence over 5-15 minute analysis workflows [2]. This approach lets the platform catch issues traditional tools might miss, such as security vulnerabilities and architectural inconsistencies [2].

While CodeRabbit faces competition from integrated solutions like GitHub and Cursor, as well as startups like Graphite and Greptile, the company is betting that customers will prefer standalone, specialized tools for critical trust layers [1][2]. Industry analysts support this view, emphasizing the importance of independent, platform-agnostic reviewers in the era of AI-assisted development [2].

CodeRabbit has already made significant inroads with notable clients such as Chegg, Groupon, and Mercury, along with over 8,000 individual developers [1]. The Linux Foundation, a prominent user, reported a 25% reduction in time spent on code reviews after implementing CodeRabbit, highlighting the platform's ability to catch issues that human reviewers had missed [2].

As AI continues to reshape the software development landscape, tools like CodeRabbit are poised to play a central role in maintaining code quality and efficiency. Still, AI solutions cannot yet be fully trusted to fix the bugs and "unusable" code written by AI, which has given rise to new roles such as the "vibe code cleanup specialist" [1].