4 Sources
[1]
The Big Bang: A.I. Has Created a Code Overload
When a financial services company recently began using Cursor, an artificial intelligence technology that writes computer code, the difference that it made was immediate. The company went from producing 25,000 lines of code a month to 250,000 lines. That created a backlog of one million lines of code that needed to be reviewed, said Joni Klippert, a co-founder and the chief executive of StackHawk, a security start-up that was working with the financial services firm.
"The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can't keep up with," she said. And as software development moved faster, that forced sales, marketing, customer support and other departments to pick up the pace, Ms. Klippert added, creating "a lot of stress."
Since A.I. coding tools from Anthropic, OpenAI, Cursor and other companies took off last year, one result has now become apparent: code overload. Aided by these tools, tech workers are producing so much code so quickly that it has become too much to handle. With anyone -- not just engineers -- able to spin up software ideas in a matter of hours, companies are trying to figure out how to deal with the glut.
In Silicon Valley, many tech workers see this moment as a new reality they must adapt to as companies incorporate A.I. tools into daily work. Some said the tools granted them coding superpowers, allowing them to spend more time coming up with software ideas instead of doing the arduous work of building it.
At the same time, there are not enough engineers to review the explosion of code for mistakes. Recruiters are increasingly looking to hire senior engineers who have experience spotting errors in code and can monitor the software for risks. Open source software projects, which anyone can contribute to, have been inundated with A.I.-enabled additions. And sometimes flaws in the code can lead to security vulnerabilities or software that crashes.
"The blessing and the curse is that now everyone inside your company becomes a coder," said Michele Catasta, the president and head of A.I. at Replit, an A.I. coding start-up in Foster City, Calif.
In a survey released by Google in September, the company found that 90 percent of software developers reported using A.I. to help them work, while 71 percent who write code used A.I. to help them. The widespread use of the tools has led to fears that A.I. can replace many engineers. Tech companies including Pinterest, Block and Atlassian have cut thousands of jobs in recent months, citing efficiencies created by A.I.
"Projects that once required hundreds of engineers can now be done by tens," Andrew Bosworth, Meta's chief technology officer, told employees this year in an internal memo, which was reviewed by The New York Times. "Work that used to take months can now take days." He added that A.I. had "profound consequences for how organizations like Meta should work."
Not long ago, the process of turning ones and zeros into computer programs was very different. Engineers pored over complicated computer languages to commit them to memory. They might write a few dozen lines of vetted, bulletproof code a day. But A.I. advancements brought the rise of agents, a kind of A.I.-powered robot that can create software largely on its own. Early versions of these agents, including from start-ups like Cursor, showed promise.
Then in November, coding agents leveled up. Anthropic and OpenAI, the leading A.I. start-ups, released updated versions of the software that powers their respective coding tools, Claude Code and Codex. The change, tech workers soon discovered, upgraded the agents from occasionally helpful engineering partners to full-fledged code-generating wizards. With just a little human guidance, an engineer could set an A.I. agent loose writing a program in a fraction of the time that human coders would need. What came next was a deluge of code.
Many tech companies are now dealing with the ripple effects. Someone has to review the A.I.-generated code to test it for bugs, security and compliance. But it can sometimes be unclear whose job it is to fix issues created by A.I.-generated code. In the past, it would be the responsibility of the person who created the code.
Companies are struggling to hire enough people to monitor the A.I. code for risks, a role called application security engineer. "There are not enough application security engineers on the planet to satisfy what just American companies need," said Joe Sullivan, an adviser to Costanoa Ventures, a Silicon Valley venture firm. The large companies he works with would add five to 10 more people in this role if they could, he said.
Other problems are quirkier. A.I. coding tools work better on laptops than in web-based environments stored on secure servers owned by companies like Amazon and Microsoft. That means more engineers are downloading their entire company's code to their laptops, creating a security risk if the laptop goes missing, Mr. Sullivan said. "That's an example of a crazy risk no one thought of six months ago that they're trying to solve right now," he said.
Sachin Kamdar, a co-founder of Elvex, an A.I. agent start-up, said he created a rule around 16 months ago that all of the company's code needed to be reviewed by a human. Otherwise, problems would be harder to fix because no one would understand the work that A.I. had done. "It's just going to break something, and they're not going to know why it broke," he said.
At companies that accept code contributions, the A.I. effect has become clear. Steve Ruiz, founder of the digital whiteboard start-up Tldraw, said he first noticed last fall that more people were trying to add to the code base of his company. (Tldraw licenses its technology, but it publishes its code and takes contributions to it.) Mr. Ruiz said the new contributors acted oddly.
Some did all the work but abandoned the code just before signing a form at the end of the process. Others ignored clear instructions or contributed a spammy barrage of updates. Mr. Ruiz concluded the contributors were probably A.I. bots, which were too much to manage. In January, he closed tldraw to outsiders. "The risk to the code base was very high," he said, adding that the onslaught could have put his team, its community and the project's reputation in jeopardy. Open source projects and coding platforms like GitHub are figuring out how to handle the new reality, he said.
For some in Silicon Valley, the solution to the code bloat seems obvious: more A.I. Anthropic and OpenAI pointed to recent product releases, including their A.I.-powered software review agents that are used to spot errors in code. (The Times has sued OpenAI and Microsoft for copyright infringement of news content related to A.I. systems. The companies have denied those claims.)
In December, Cursor bought Graphite, a start-up that builds code-reviewing bots. The company is incorporating Graphite's technology into an offering to help engineers prioritize the most sensitive code that needs vetting. Tido Carriero, Cursor's head of engineering, product and design, said the most advanced companies had figured out what to do about the A.I.-generated code eruption and were now focused on adapting their businesses to this new way of working. Cursor is building products to help with that, too, he said.
"The software development factory kind of broke," he said. "We're trying to rearrange the parts in some sense."
[2]
"They operate like slot machines": AI agents are scrambling power users' brains
The big picture: The most popular agentic AI systems have triggered something that looks a lot like addiction among some of tech's highest performers.
Catch up quick: Agentic coding tools like Anthropic's Claude Code, OpenAI's Codex and the open-source tool OpenClaw can write, test and ship software autonomously.
* The developer prompts, watches, reviews and then prompts again.
* It sounds great. Until it isn't.
What they're saying: OpenAI co-founder Andrej Karpathy -- coiner of the term "vibe coding" -- told the No Priors podcast he's been in a "state of AI psychosis" since December, trying to figure out what's possible and "pushing it to the limit."
* Karpathy says his ratio of hand-written to AI-delegated code flipped from 80/20 to 0/100 in December.
* He now spends 16 hours a day issuing commands to agent swarms.
* Karpathy pays a monthly subscription fee, and when he has tokens left over near the end of the month he says he "feel[s] extremely nervous" and rushes to exhaust his supply in order to keep up with everyone else.
Y Combinator CEO Garry Tan has called his experience grinding with coding tools "cyber psychosis" and posted in January that he "stayed up 19 hours yesterday and didn't sleep til 5AM."
* In response to a startup founder bragging that his CTO hadn't slept in 36 hours, Tan said: "This is unhealthy by the way (speaking from experience)."
AI developer and blogger Simon Willison, who has 25 years of pre-AI coding experience, said on Lenny's Podcast: "There is a limit on human cognition, in how much you can hold in your head at one time. And it's very easy to pop that stack at the moment."
* Developers need to know their own limits and figure out responsible ways to prevent burnout, he says. Choosing agentic coding over sleep is "obviously unsustainable."
Allen Institute for Artificial Intelligence research scientist and Carnegie Mellon University assistant professor Tim Dettmers says peak productivity comes from working with as many agents as possible in parallel, and that requires near-constant context switching, which humans aren't great at.
* "Part of the draw is that agents expand what feels possible, but at the same time they really amplify this ongoing tension around focus and mental bandwidth," Dettmers tells Axios.
The intrigue: Work with agentic coding tools is starting to look less like a fun quirk and more like a pathology.
* There are elements of gambling and addiction in the way people are using these tools, Willison said on Lenny's Podcast.
* "Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things," software developer Armin Ronacher wrote in January.
Between the lines: Researchers from Boston Consulting Group and UC Riverside call the phenomenon "brain fry": mental fatigue from excessive use or oversight of AI tools beyond one's cognitive capacity.
* Their study, published in Harvard Business Review, found that "AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit."
* Companies view token use as a sign of productivity, according to Harvard Business Review.
Reality check: Serious coders have always been "locked in" to meet deadlines, and pulling all-nighters is nothing new.
* Elon Musk and his crews have been sleeping at the office and on factory floors for years, but at least they slept.
Zoom in: Quentin Rousseau, CTO and co-founder of the incident management platform Rootly, told Axios he couldn't sleep for months after switching to agentic coding. Eventually he needed a doctor to prescribe sleep medication just to shut his brain off at night.
* Rousseau calls himself an AI accelerationist, but warns that people need to use agentic tools carefully because they're designed to be addictive.
* "They operate like slot machines," he said. "You hit one prompt, you get an answer, you get some coding done." But then, Rousseau says, sometimes the agent will fail miserably.
* He added that founders are, "by default," more addicted to these productivity tools. "We're probably the first people to be collateral of these systems," he told Axios.
What we're watching: If software developers are a bellwether for AI burnout, brain fry could be coming for us all.
[3]
Life with AI causing human brain 'fry'
New York (AFP) - Heavy users of artificial intelligence report being overwhelmed by trying to keep up with and on top of the technology designed to make their lives easier. Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments of hard-core AI adopters.
Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."
The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models."
People experiencing AI burnout are not casually dabbling with the technology -- they are creating legions of agents that need to be constantly managed, according to Tim Norton, founder of the AI integration consultancy nouvreLabs. "That's what's causing the burnout," Norton wrote in an X post.
However, BCG and others do not see it as a case of AI causing people to get burned out on their jobs. A BCG study of 1,488 professionals in the United States actually found a decline in burnout rates when AI took over repetitive work tasks.
Coding vigilance
For now, "brain fry" is primarily a bane for software developers given that AI agents have excelled quickly at writing computer code. "The cruel irony is that AI-generated code requires more careful review than human-written code," software engineer Siddhant Khare wrote in a blog post. "It is very scary to commit to hundreds of lines of AI-written code because there is a risk of security flaws or simply not understanding the entire codebase," added Adam Mackintosh, a programmer for a Canadian company.
And if AI agents are not kept on course by a human, they could misunderstand an instruction and wander down an errant processing path, resulting in a business paying for wasted computing power.
'Irritable'
Wigler noted that the promise of hitting goals fast with AI tempts tech start-up teams already prone to long workdays to lose track of time and stay on the job even deeper into the night. "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.
Mackintosh recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."
A musician and teacher who asked to remain anonymous spoke of struggling to put his brain "on pause," instead spending evenings experimenting with AI. Nonetheless, everyone interviewed for this story expressed overall positive views of AI despite the downsides.
BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."
[4]
The AI burn out: How and why is brain fry entering the lexicon of AI coders
Developers are experiencing burnout from intense AI coding sessions. Tools like Claude Code and OpenAI's Codex are generating vast amounts of code, overwhelming even top engineers. This 'brain fry' leads to mental fatigue and slower decision-making. Some researchers have quit leading AI firms citing exhaustion. Organizations are exploring strategies to mitigate this growing issue.
Running a startup can throw your schedule into chaos. Yet Kalyan Sivasailam, founder of 5C Network, an AI-based radiology platform, made it home by 10 pm most days and could tell himself "Done for the day." That is no longer the case. After coming home, he now fires up his computer; the multiple projects he works on - from an AI interview platform for radiologists to an agentic AI workflow for his startup - come alive in the dead of night. Sivasailam runs five Claude Code instances, one Google Antigravity and one OpenAI Codex. "There is so much code being generated that you can easily get lost in building and verifying them," he told ET.
Sivasailam is part of a growing number of builders who are spending more time than ever in intense coding sessions that result in burnout for some. He says this fatigue is widespread in his network, and he sees some of the top engineers on his team burning themselves out. This cognitive overload has overwhelmed even the best engineers working in frontier labs. Multiple people working at OpenAI and xAI have quit in recent months after feeling totally drained.
On February 26, OpenAI researcher Hieu Pham posted on X that he was leaving the firm due to burnout. "I cannot believe I would say this one day, but I am burnt out. All the mental health deteriorating that I used to scoff at is real, miserable, scary, and dangerous," he wrote. He said that he would take a break and move his family to his home country, Vietnam, where he wanted to try something new and allow himself time to heal.
A couple of weeks later, another researcher, Haotian Liu, announced he was quitting Elon Musk's xAI after two years of intense work. He helped build the video generation model Grok Imagine. "...shipping it as a great product used by millions, all within 6 months, at age 28: I feel proud. But now it's time for me to move on. I'm burnt out..," Liu wrote on X.
The brain fry
A recent Harvard Business Review report, authored by CXOs at Boston Consulting Group, has termed this 'brain fry', referring to the mental fatigue from excessive use of AI tools beyond one's cognitive ability, with 14% of workers reporting mental fog, headaches and slower decision-making.
Multiple founders and industry watchers ET spoke to shared that for power users of the technology, the work had only intensified instead of easing. As technology has improved productivity, more complex workloads have been thrown at developers. Unlike in the past, the number and kind of decisions a developer needs to make has changed, explains Prasanna Krishnamoorthy, managing partner at Upekkha, an AI accelerator. "Earlier, these developers were making coding decisions. But now with AI doing most of the actual coding, the kind of decisions they make are changing, and that is also creating burnout," he said.
Before the advent of AI coding tools, developers were making decisions about what language and datasets to use. That has given way to higher-level decisions about the kind of architecture, design and products to be built, which were usually taken by senior developers, often within a short time, adding to the stress. "Given the pace of developments, all these decisions need to be taken quickly," he added.
That is not the only problem. For many developers who are exploring AI and building side projects, these tools are often addictive. Ashwin (name changed to protect identity), an AI researcher at one of the frontier AI companies, usually builds a couple of hobby projects on the side.
But in the past year, he was able to build at least 4-5 side projects thanks to the sheer capabilities of these models. "It is like an addiction, because you are seeing how the code works, and the more it does, the more you want to continue, leading to burnout," he added. Since you are not looking for an outcome but building it as a toy project, this becomes addictive like gambling, says Krishnamoorthy.
What can be done?
Ashwin has now started taking breaks between coding marathons. "Earlier I would go for weeks working on one side project after another, till 2 am. But now, I take a break for a week or two between intense coding sessions." 5C's Sivasailam says he sits down with his team to talk about this topic. "The first thing to do is reset the mental framework on what can be done using AI," he says. One of the challenges is that many senior developers find it hard to accept that the job now boils down to managing AI agents instead of writing code.
The Harvard Business Review study points out that where one uses AI tools also matters. It noted that workers who use AI to reduce repetitive tasks see 15% lower burnout rates compared to those who do not use AI. "At the organizational level, directionally, practices like providing clear AI strategy and offering training seemed to help," the report noted.
AI coding tools from Anthropic and OpenAI have increased code production from 25,000 to 250,000 lines per month at some companies. But the productivity surge comes at a cost: developers report AI brain fry, spending 16-hour days managing AI agents, losing sleep, and experiencing mental fatigue that resembles addiction.
AI coding tools have fundamentally altered how software developers work, but not always for the better. When a financial services company deployed Cursor, an AI coding tool, monthly code production skyrocketed from 25,000 lines to 250,000 lines, creating a backlog of one million lines requiring review [1]. This code overload represents a new reality for tech companies as agentic AI coding tools like Anthropic's Claude Code, OpenAI's Codex, and open-source alternatives gain widespread adoption.
Source: NYT
According to a Google survey from September, 90 percent of software developers now use AI to help them work, while 71 percent who write code rely on AI assistance [1]. The increased coding productivity has created what Joni Klippert, CEO of security startup StackHawk, describes as overwhelming stress across entire organizations, forcing sales, marketing, and customer support departments to accelerate their pace [1].
The mental exhaustion from AI has manifested in what Boston Consulting Group researchers call "AI brain fry": a state of cognitive overload stemming from excessive use or supervision of AI tools beyond human cognitive limits [3]. OpenAI co-founder Andrej Karpathy revealed he entered a "state of AI psychosis" in December, spending 16 hours daily issuing commands to agent swarms [2]. His ratio of hand-written to AI-delegated code flipped from 80/20 to 0/100 within months. Y Combinator CEO Garry Tan described his experience as "cyber psychosis," posting in January that he stayed up 19 hours and didn't sleep until 5AM [2].
Source: ET
The addictive nature of AI tools has become particularly concerning. Quentin Rousseau, CTO of incident management platform Rootly, couldn't sleep for months after switching to agentic coding and eventually required prescription sleep medication [2]. "They operate like slot machines," Rousseau explained, describing how developers hit one prompt, get an answer, complete some coding, but then the agent sometimes fails miserably [2].
The mental fatigue extends beyond simple overwork. AI developer Simon Willison, with 25 years of pre-AI coding experience, notes that human cognition has limits in how much can be held in the head at one time, and "it's very easy to pop that stack at the moment" [2]. Tim Dettmers, research scientist at the Allen Institute for Artificial Intelligence and assistant professor at Carnegie Mellon University, explains that peak productivity requires working with as many AI agents as possible in parallel, demanding near-constant context switching, something humans struggle with [2]. A study by Boston Consulting Group and UC Riverside found that "AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit" [2]. The research revealed that 14% of workers report mental fog, headaches, and slower decision-making [4].
The explosion of AI-generated code has exposed critical infrastructure gaps. Companies struggle to hire enough application security engineers to review code for bugs, security vulnerabilities, and compliance issues [1]. "There are not enough application security engineers on the planet to satisfy what just American companies need," said Joe Sullivan, adviser to Costanoa Ventures, noting that large companies would add five to 10 more people in this role if they could [1]. Software engineer Siddhant Khare captured the paradox: "The cruel irony is that AI-generated code requires more careful review than human-written code" [3]. Adam Mackintosh, a Canadian programmer, described spending 15 consecutive hours fine-tuning around 25,000 lines of code, ending the session feeling unable to code anymore with depleted dopamine levels [3].
The efficiency gains have triggered workforce reductions across tech companies. Pinterest, Block, and Atlassian have cut thousands of jobs in recent months, citing efficiencies created by AI [1]. Meta's chief technology officer Andrew Bosworth told employees that "projects that once required hundreds of engineers can now be done by tens" and "work that used to take months can now take days," adding that AI has "profound consequences for how organizations like Meta should work" [1]. The shift has fundamentally changed developer roles. Prasanna Krishnamoorthy, managing partner at AI accelerator Upekkha, explains that developers now make higher-level decisions about architecture, design, and products, tasks previously reserved for senior developers, rather than coding decisions about language and datasets [4].
The burnout has driven researchers from leading AI firms. OpenAI researcher Hieu Pham quit in February, posting that "all the mental health deteriorating that I used to scoff at is real, miserable, scary, and dangerous" [4]. Weeks later, Haotian Liu left Elon Musk's xAI after helping build the video generation model Grok Imagine, stating he was "burnt out" [4]. Boston Consulting Group recommends that company leaders establish clear limits regarding employee use and supervision of AI agents [3]. Some developers have adopted personal strategies: one AI researcher now takes week-long breaks between intense coding sessions after previously working on projects until 2AM for weeks at a time [4]. If software developers serve as a bellwether for AI burnout, this phenomenon could extend across all knowledge work as AI tools proliferate [2].
Source: Axios
Summarized by Navi