2 Sources
[1]
Greg Brockman says 80% of OpenAI's code is now written by AI
Greg Brockman's comments at Sequoia's AI Ascent 2026 conference fit a pattern of AI lab leaders citing self-reinforcing productivity numbers, but the underlying evidence on AI coding productivity remains substantially more contested than the headline figure suggests. Speaking at the conference on Thursday, OpenAI president Greg Brockman said AI is now writing roughly 80% of the company's code, according to Business Insider. "It's hard to know what percent is not being written by AI," Brockman said, echoing a comment he made on the Knowledge Project podcast in late April.

The remarks are part of a broader argument Brockman has been making across multiple interviews this month: that AI coding capabilities have crossed a productivity threshold, that AGI is "70-80% there" by his personal definition, and that compute scarcity is now the binding constraint on what AI labs can deliver.

The 80% figure is striking but ambiguous, and the two stronger interpretations are very different from each other. The first is that AI tools write 80% of the lines of code committed to OpenAI's codebase, a productivity claim. The second is that AI is involved in some way (autocomplete, refactoring suggestion, generation followed by human revision) in 80% of the coding work, a usage claim. Brockman's qualifier, "it's hard to know what percent is not," aligns more closely with the second interpretation, and the gap between the two is large enough to materially alter what the figure means.

Brockman is not alone in citing high AI-coding figures. Anthropic CEO Dario Amodei said publicly last year that AI was writing 90% of code at Anthropic, with a target of 100% within months.
Cursor reached $2 billion in annualised revenue within three years on the strength of AI-assisted coding workflows; GitHub Copilot has 4.7 million paid subscribers and 90% adoption among the Fortune 100; and Anthropic's $30 billion run-rate revenue is, by the company's own description, overwhelmingly concentrated in coding, enterprise search, and general productivity. The pattern is consistent: the labs producing the underlying models are reporting that those models are transformative for software engineering.

The deeper context is one Brockman articulated more clearly in his early-April Big Technology podcast interview. He described a "December 2025 inflection" in which models went from being able to do roughly 20% of typical engineering tasks to roughly 80%, a shift he characterised as "you absolutely need to retool your workflow around these AIs." He cited an OpenAI engineer who had previously been unable to get AI to handle low-level systems engineering and now hands the model a design document and watches it implement, instrument, and profile the resulting system to production quality.

There is, however, a significant body of work questioning whether internal AI-coding productivity numbers should be taken at face value. A February 2026 paper from the National Bureau of Economic Research found that 80% of companies actively using AI reported no measurable impact on productivity. A widely cited 2025 MIT study concluded that 95% of corporate AI pilot programmes generated zero return on investment. Machine learning engineer Han-Chung Lee has argued in a widely circulated GitHub post that even rosy internal AI productivity numbers should be treated with skepticism, because they are typically produced to hit adoption targets that no one can independently audit. The independent academic critique has been sharpest from cognitive scientist Gary Marcus, who has called the broader AGI claims "a trillion-dollar delusion."
"We as a society are placing truly massive bets around the premise that AGI is close," Marcus said in a recent keynote at the Royal Society in London. "Large language models are deeply flawed imitators that are preying on the Eliza effect."

Marcus' specific point about coding is structurally important: a model that produces code which compiles and passes the tests it was given is not the same as a model that produces correct, secure, maintainable, well-architected software. The first is verifiable in seconds; the second requires the kind of judgement that has been the historical bottleneck on engineering productivity. Brockman acknowledges the gap, even as he argues it is closing. "The technology we have right now is very jagged," he said in the Big Technology interview. "It is absolutely superhuman at many tasks. When it comes to writing code, those kinds of things, the AI can just do it. But there's some very basic tasks that a human can do that our AI still struggles with."

Two things make Brockman's 80% figure particularly worth examining at this moment. The first is the financial scale of OpenAI's current capital deployment. The company raised $122 billion in 2026 and is targeting an IPO at potentially $1 trillion. Brockman has been explicit that the central question for OpenAI is no longer model capability but compute scarcity. Compute is now "a revenue centre, not a cost centre," he has said, and OpenAI is committing essentially all available capital to it. That capital deployment is being justified, in significant part, by exactly the kind of productivity claims he is making about AI coding.

The second is the labour market context. Tech companies have laid off thousands of engineers over the past two years, with management increasingly citing AI-driven productivity gains as the rationale. If AI is genuinely doing 80% of the coding at companies like OpenAI and Anthropic, the labour market consequences are substantial.
If the figure reflects a less robust reality (AI involved in some workflow stage in most coding tasks, but not actually replacing 80% of engineering effort), then the layoffs may be running ahead of the actual productivity gains, and the long-term human cost of the gap may be considerable.

There is one additional layer to Brockman's framing worth noting: he himself, by his own description and in TIME's 100 Most Influential People in AI profile, spends approximately 80% of his working time coding, between 60 and 100 hours per week. The man making the claim that AI now writes 80% of the company's code is also, by reputation, the company's most prolific human coder. Whether that makes him the most credible witness to the productivity shift or the most invested in believing in it depends on which framing of the figure one accepts.
[2]
OpenAI's Greg Brockman Says AI Went From Writing 20% To 80% Of Code In A Single Month
OpenAI President Greg Brockman said AI coding tools leaped from writing 20% to 80% of developer code in a single month, marking a fundamental shift from productivity aid to primary software development driver. "We went from these agentic coding tools writing 20% of your code to writing 80% of your code," Brockman said at a Sequoia Capital event, describing the change seen within December alone. "They go from being kind of a sideshow to being the main thing that you're doing," he told Sequoia partner Alfred Lin.

Big Tech Is Already Living This Reality

Last month, former OpenAI researcher Andrej Karpathy said he had not personally typed a line of code since December, delegating all programming tasks to AI agents.

Human Oversight Remains Non-Negotiable

Not everyone is bullish on speed alone. Venture capitalist Chamath Palihapitiya warned that faster AI coding means little without capturing the reasoning behind engineering decisions. Brockman cautioned against blind adoption, stressing that at OpenAI, a human must still sign off on all AI-generated code before it is merged. "We still want a human to be accountable for all code that gets merged," Brockman said.

Brockman is currently carrying added responsibility at OpenAI, stepping in to oversee product after Chief of Applications Fidji Simo took medical leave.
OpenAI president Greg Brockman revealed that AI coding tools now write roughly 80% of the company's code, a dramatic leap from 20% in just one month last December. While AI labs tout transformative productivity gains, independent research questions whether these internal metrics translate to measurable business impact, with some studies showing zero ROI from AI adoption.
OpenAI president Greg Brockman disclosed at Sequoia Capital's AI Ascent 2026 conference that AI is now writing roughly 80% of the company's code, marking what he describes as a fundamental shift in software development workflows [1]. Speaking to Sequoia partner Alfred Lin, Brockman explained that AI coding tools leaped from handling 20% to 80% of developer code within a single month, specifically December 2025 [2]. "It's hard to know what percent is not being written by AI," Brockman said, echoing comments he made on the Knowledge Project podcast in late April [1]. The statement positions AI as having crossed a threshold from productivity aid to primary driver of software development.
Source: Benzinga
The claim that AI writes 80% of code carries significant ambiguity that shapes its interpretation. Two distinct readings emerge: either AI tools write 80% of the lines of code committed to OpenAI's codebase, or AI is involved in some capacity (autocomplete, refactoring suggestions, generation followed by human revision) in 80% of coding work [1]. Brockman's qualifier about the difficulty of determining what percentage is not AI-written aligns more closely with the usage interpretation than with a pure productivity claim. The distinction matters considerably when evaluating the productivity impact of AI in coding, as involvement differs substantially from autonomous generation. Brockman described a December 2025 inflection point where models went from handling roughly 20% of typical engineering tasks to roughly 80%, a shift he characterized as requiring teams to "absolutely retool your workflow around these AIs" [1].

Brockman is not alone in citing high AI coding productivity figures. Anthropic CEO Dario Amodei publicly stated last year that AI was writing 90% of code at Anthropic, with a target of reaching 100% within months [1]. GitHub Copilot has reached 4.7 million paid subscribers with 90% adoption among the Fortune 100, while Cursor achieved $2 billion in annualized revenue within three years on the strength of AI-assisted coding workflows [1]. Anthropic's $30 billion run-rate revenue is concentrated overwhelmingly in coding, enterprise search, and general productivity applications. Former OpenAI researcher Andrej Karpathy stated last month that he had not personally typed a line of code since December, delegating all programming tasks to AI agents [2]. The pattern is consistent: the AI labs producing the underlying models report those models as transformative for software engineering.

Despite the aggressive adoption of AI-generated code, human oversight remains non-negotiable at OpenAI. Brockman stressed that a human must still sign off on all AI-generated code before it is merged into production systems. "We still want a human to be accountable for all code that gets merged," Brockman said [2]. This requirement acknowledges that human judgment remains essential for evaluating code quality beyond mere compilation. Brockman is currently carrying added responsibility at OpenAI, stepping in to oversee product after Chief of Applications Fidji Simo took medical leave [2].

A significant body of independent research questions whether internal AI coding productivity numbers should be taken at face value. A February 2026 paper from the National Bureau of Economic Research found that 80% of companies actively using AI reported no measurable impact on productivity [1]. A widely cited 2025 MIT study concluded that 95% of corporate AI pilot programs generated zero return on investment [1]. Machine learning engineer Han-Chung Lee has argued in a widely circulated GitHub post that even optimistic internal AI productivity numbers should be treated with skepticism, as they are typically produced to hit adoption targets that no one can independently audit [1]. Cognitive scientist Gary Marcus has called the broader AGI claims "a trillion-dollar delusion," stating that "large language models are deeply flawed imitators that are preying on the Eliza effect" [1]. Marcus emphasizes that a model producing code that compiles and passes tests differs fundamentally from one producing correct, secure, maintainable, well-architected software; the latter requires the kind of judgment that has been the historical bottleneck on engineering productivity.

The tension between AI lab claims and independent research creates uncertainty about AI coding's actual impact on engineering productivity. Brockman himself acknowledges limitations, describing current technology as "very jagged": "absolutely superhuman at many tasks," including writing code, yet struggling with "some very basic tasks that a human can do" [1]. The stakes are substantial given OpenAI's financial scale: the company raised $122 billion in 2026 and is targeting an IPO at a potential $1 trillion valuation [1]. Brockman has been explicit that the central question for OpenAI is no longer model capability but compute scarcity as the binding constraint. Venture capitalist Chamath Palihapitiya warned that faster AI coding means little without capturing the reasoning behind engineering decisions [2]. As AI coding tools continue advancing, the industry faces a critical question: whether the productivity claims from AI labs will translate into measurable business outcomes or remain largely confined to internal metrics that resist independent verification.