4 Sources
[1]
A.I. Isn't Coming for Every White-Collar Job. At Least Not Yet.
Cade Metz has covered artificial intelligence for more than 15 years.

In January, Perry Metzger, a computer programmer living outside Boston, tested the limits of an artificial intelligence technology called Codex. Built by OpenAI, the maker of ChatGPT, Codex can write computer code in much the same way that chatbots generate text in plain English. Using this A.I. technology, Mr. Metzger and his business partner, another seasoned programmer, designed an online word processor along the lines of Google Docs or Microsoft Word.

If he and his partner had done the coding on their own, Mr. Metzger said, they would have needed at least two months to build this complex piece of software. With Codex, they finished in two days.

"You have to keep a close eye on what it is doing and make sure it doesn't make mistakes, and create ways of testing the code," said Mr. Metzger, who has been building software since he was a teenager in the 1970s. "But you can move at a speed that was unimaginable in the past."

Codex is among a new wave of A.I. code generators that are rapidly changing the way people build software. Experienced programmers like Mr. Metzger are shocked by how powerful these systems have become in recent months after a series of improvements from OpenAI and its many rivals, including start-ups like Anthropic and tech giants like Google.

"I used to do the coding and they would help me do the work," Mr. Metzger said of technologies like Codex. "Now, I supervise them as they do the work."

In early February, this phenomenon incited a sell-off on Wall Street, as investors predicted that code generators would undermine companies that have spent decades building software without help from A.I. In the days that followed, many people began to worry that this kind of technology would quickly replace programmers en masse -- and that similar systems would soon supplant other office workers, too.

But even as code generators demonstrate the growing power of artificial intelligence, they require extensive oversight, according to interviews with more than 50 A.I. researchers, experienced programmers, security experts and others who have built, used and examined these technologies over the past several years. Systems like Codex have made coding easier, but they cannot match the many skills of experienced programmers. And when people misuse them, they can complicate software design, slow it down or even wreak havoc across the internet.

These tools have made software design so easy that they have caught the attention of people who have no experience with computer programming. In January, Anthropic's code generator, Claude Code, went viral as lawyers, photographers and school principals used English prompts to build apps that helped organize their laundry or send emergency texts.

But a personal laundry app is very different from the enormously complex software that drives businesses and governments. Software that sends emergency texts is far simpler than internet applications, like Google Docs, LinkedIn and Uber, that serve billions of people across the globe. Building these applications requires the planning, guidance and experience of coders like Mr. Metzger. The most complex applications cannot be built without the enormous teams -- and vast technical resources -- available only to large software companies.

Most experts believe that code generators will replace today's junior programmers. Using these tools, they say, feels like delegating tasks to someone who is still learning the trade.
But these experts are divided on whether these tools will significantly harm the overall market for coders. Some, including Mr. Metzger, argue that code generators will expand the job market as programmers and software companies use them to build increasingly complex and powerful applications.

"If you are a skilled programmer, there will be more work for you, and you will find more exciting things to do," said Grady Booch, a former chief scientist for software engineering at IBM Research who is regarded as a historian of the field.

In late January, a team of computer scientists at Carnegie Mellon University published a study examining the use of A.I. code generators by experienced programmers over several months. This followed a similar study they published in November. Both studies found that while code generators could speed up software development in the short term, they could also degrade the quality of the code, which typically slows projects in the long term.

"There were significant speedups in terms of the amount of code that is produced," said Bogdan Vasilescu, a computer science professor who helped lead the studies. "But that came at a cost."

Computer programmers call this "technical debt," and it includes security holes that can open software applications to attack, allowing hackers to lift personal data stored and processed by these apps.

Last month, a technologist in Southern California, Matt Schlicht, launched a social network for A.I. agents called Moltbook. For tech enthusiasts, it showed the power of code generators like Anthropic's Claude Code and OpenAI's Codex. (The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

As Mr. Schlicht showed, code generators are not limited to building software. They can serve as "A.I. agents" -- personal digital assistants that can complete tasks using existing software apps, including spreadsheets, online calendars and email services. That is why many people argue that A.I. will soon replace more than just low-level coders.

Mr. Schlicht built Moltbook with help from one of these A.I. bots, and his social network was open only to this new kind of bot. Within days, thousands of bots were chatting with one another about everything from cryptocurrency to the nature of consciousness. But security experts soon discovered that a gaping security hole had exposed private information of the thousands of people who were running these bots on their personal machines. Moltbook did not just demonstrate the power of A.I. It served as a cautionary tale for how it can go wrong.

"You have to go back and look at the thing you built with A.I.," said Will Wilson, the chief executive of Antithesis, a company that tests computer code for bugs and security holes.
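The Moltbook flaw is a reminder of how prosaic these holes usually are. As an illustration only -- this is a hypothetical sketch in Python's Flask framework, not Moltbook's actual code -- the classic mistake is an endpoint that returns stored data without checking who is asking:

```python
# A hedged illustration, not Moltbook's real code: the kind of missing
# authorization check that exposes private data in hastily generated apps.
# All names here (the routes, get_messages, the key header) are hypothetical.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In-memory stand-ins for a real datastore and credential table.
MESSAGES = {"alice": ["my API key is ..."], "bob": ["call me at ..."]}
API_KEYS = {"alice": "key-alice-123", "bob": "key-bob-456"}

@app.route("/v1/messages/<user>")
def get_messages(user):
    # The version generated in a hurry stops here and hands any user's
    # private messages to any caller:
    #     return jsonify(MESSAGES.get(user, []))
    # The reviewed version verifies that the caller owns the data first.
    presented = request.headers.get("X-Api-Key", "")
    if API_KEYS.get(user) != presented:
        abort(403)  # caller is not the owner of this mailbox
    return jsonify(MESSAGES.get(user, []))
```

Nothing about the vulnerable version looks broken when the app is demoed; it fails only when someone asks for data they shouldn't have -- which is exactly the check Mr. Wilson says humans still have to go back and make.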
For many A.I. researchers and computer programmers, the noticeable flaws in the technology are only temporary. A.I. has steadily improved over the last several years, they argue, and it will continue to improve at a rapid rate. They flatly dismiss studies like those from Carnegie Mellon because they did not look at the systems released by Anthropic and OpenAI as recently as this month. They argue that the newest systems no longer put a drag on software development and that, as the months pass, the technology will handle more and more of the tasks that human engineers handle.

Even as they predict that A.I. will replace computer coders and traditional software companies, many acknowledge that forecasting the future is a tricky business.

"Today, if you are building a significant piece of software and you don't understand what the A.I. is doing, you are going to get yourself in trouble very, very quickly," Mr. Metzger said. "Will this still be the case in three years? In five years? I don't know."
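What the supervision Mr. Metzger describes looks like in practice is mundane: prompt, generate, read, test. Here is a minimal sketch of that loop, assuming the OpenAI Python SDK; the model name, prompt and workflow are illustrative, not a description of Mr. Metzger's actual setup:

```python
# A sketch of the generate-then-review loop, assuming the OpenAI Python SDK.
# The model name and prompt are illustrative; set OPENAI_API_KEY to run.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Python function word_count(text) that returns the number of "
    "words in a string, treating any run of whitespace as one separator. "
    "Return only the code."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any code-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)

# The human step: the draft is saved to a file, read line by line, and run
# against tests the programmer writes -- not shipped on faith.
```

The generation call is the fast part. The value of a programmer like Mr. Metzger lies in everything after the print: reading the draft, writing tests for it and deciding whether to trust it.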
[2]
Developer's Honest Assessment of AI at Work Rattles the Official Narrative
A veteran programmer shared his brutally honest opinions about AI's role in the workplace, and it's as much an indictment of the tech as it is of the organizations lazily deploying it.

In an X rant that's being praised in online programming circles, the programmer, Dax Raad, said that what's holding back software companies isn't the speed at which they're able to churn out code but the quality of their ideas -- an issue AI isn't going to solve, despite the industry's fixation on its supposed ability to supercharge productivity.

"Your org rarely has good ideas. Ideas being expensive to implement was actually helping," wrote Raad, whose own company OpenAuth sells AI tools. And workers aren't using AI to be ten times more effective, he continued; instead, "they're using it to churn out their tasks with less energy spend." Worse yet, the "two people on your team that actually tried are now flattened by the slop code everyone is producing, they will quit soon."

"Even when you produce work faster you're still bottlenecked by bureaucracy and the dozen other realities of shipping something real," Raad concluded.

There's some research that backs up Raad's scathing assessment. An ongoing study reported in Harvard Business Review, which monitored 200 employees at a US tech company, found that AI was actually intensifying the workers' jobs instead of reducing their workloads. Using AI to accelerate tasks, it turned out, was a double-edged sword: it led to "workload creep," a vicious cycle in which AI raised expectations for how fast workers had to churn things out, which in turn made them more reliant on AI to keep up with the greater demands. The upshot -- worker fatigue, burnout, and lower-quality work -- is not the hallmark of a thriving organization.

Another study documented how AI led to employees passing off low-quality "workslop" that masqueraded as good work but in reality required someone else downstream to fix it. On top of slowing everything down, it bred resentment among coworkers, with some admitting that receiving workslop from a colleague lowered their opinion of them.

As Raad makes clear, AI is not a cure-all. And even if AI does increase productivity, that productivity can be a mirage. How often are AI models producing shoddy code? And what if that shoddy code goes unnoticed? Maybe, as Raad suggests, ideas being "expensive to implement" was a good thing, because it forced engineers to think about a problem creatively. Not every impulse should be entertained. What good are a thousand ideas dashed off with an AI compared with a few promising ones that are honed and given time and attention? The former may seem more productive when it's really a collection of dead ends.

Moreover, having employees become dependent on AI hardly seems conducive to rewarding and fostering creativity. As numerous experts have warned, it's another form of cognitive offloading, in which crucial functions of our brain, including critical thinking, are outsourced to a piece of technology.

This isn't the line being peddled by tech companies, however. Nvidia CEO Jensen Huang reportedly told his workers they'd be "insane" not to use AI to complete every possible task. Microsoft's AI CEO Mustafa Suleyman claims AI is already so effective that virtually all white-collar tasks will be automated within a year and a half. And Microsoft and Google both brag that over a quarter of their code is now AI-generated.

But however useful these AI tools may or may not be, they can't work miracles.
At the end of the day, it falls to humans to run a tight ship. "Even when you produce work faster" with AI, Raad said, "you're still bottlenecked by bureaucracy and the dozen other realities of shipping something real."
[3]
The CEO of a $1 billion AI unicorn says his peers in Silicon Valley want you to fear for your job, but they're actually first on the chopping block
Silicon Valley's artificial intelligence (AI) boom has sparked widespread panic about the future of human labor, a moment summed up by AI executive Matt Shumer's viral essay likening this point in white-collar work to February 2020, just before the pandemic devastated American life. Shumer warned that white-collar workers have to figure out a plan B right now, because a Covid-like extinction event is coming for white-collar work. Almost simultaneously, Microsoft's AI chief Mustafa Suleyman predicted that anyone who looks at a computer for a living will be out of work within 18 months.

This was a revival of the doomsday predictions that marked the first half of 2025 before going ominously silent. Anthropic's Dario Amodei, for instance, predicted that AI would eliminate half of all entry-level white-collar jobs, while Ford CEO Jim Farley said it would wipe out half of white-collar jobs, full stop.

Tanmai Gopal says these dire predictions are a classic case of Silicon Valley self-projection, even narcissism. The co-founder and CEO of PromptQL, a $1 billion-plus Bay Area unicorn that helps companies with AI adoption, told Fortune in a recent interview that the AI doomsday predictions contain a grain of truth while also being massively overstated. "That's 100% what's happening, where you have a bunch of ... people who are in the hype cycle." Gopal said his community in the valley is "feeling the awesomeness of this AI" but "we're projecting that into domains that we don't actually understand." "It's like, oh, this is the problem for 7 billion people on the planet, because I'm in Silicon Valley, so I obviously know what's best, right?"

Gopal also noted that the cynics have a point: these doomsday predictions tend to arrive right around the time of the next multibillion-dollar funding round for the many AI start-ups that have yet to go public, offering a clear fund-raising rationale that may not bear out. In general, he added, "Tech people ... think like, this affects me. So it's going to affect everyone like that." Actually, Gopal said, that's just not the case. But when it comes to coders, even senior software engineers, who are exposed to the "awesomeness" of the AI tools now available, he said a paradigm shift is under way.

Gopal was speaking to Fortune weeks after the "SaaSpocalypse" wiped out $2 trillion in software-as-a-service valuations, with investors realizing, as Bank of America Research recently put it, that AI is a "double-edged sword" and not purely an upside play. It could very easily "cannibalize" many businesses, BofA said, such as software that AI is now advanced enough to write itself.

Economists have been puzzling over very noisy data over the last year or so, with the U.S. economy largely flatlining in job production while also facing elevated tariff costs and far fewer immigrants entering the workforce. Some AI thought leaders, notably Stanford's Erik Brynjolfsson, looked closely at the data and saw productivity really starting to lift off in 2025. Writing in a Financial Times op-ed, Brynjolfsson noted that the latest jobs report revised total job gains for 2025 down to just 181,000, while his own calculation projected productivity growth of 2.7% for the year, versus the 1.4% average over the past decade. This lends weight to the AI displacement theory, with even Federal Reserve Governor Michael Barr recently warning that millions could be "essentially unemployable" in the near future.
Gopal said it's true that the tech industry has inadvertently automated itself, reaching the era of "baby AGI" (artificial general intelligence) specifically for coding. The latest AI models have the judgment and taste of an "average senior software engineer," Gopal said, explaining that standard software engineering relies heavily on converting established business context into technical code, and because AI excels at this translation, coding has become the first major domino to fall.

"What used to be kind of sometimes considered the epitome ... of white collar was like high-grade software engineering," Gopal noted. "That's been all the rage for the last 30 years and I'm excited to see that go." He explained that his excitement stems from the robotic nature of the jobs that robots are already starting to perform, and from what he's seeing on the front lines of his company, which helps Fortune 500 companies build AI tools and agents specialized to their businesses.

"What we've been doing over the last year is ... we've been working exactly at that intersection," Gopal said, and for the most part he's found that "AI is not useful" because it needs so much business context to be effective. "People keep thinking it's a technical problem," but the difficult truth is that AI can't access the business context that lives inside people's heads and hasn't been translated into data -- and may never be. "People are thinking, 'Oh, it's like a semantic layer and a data problem and get your data ready and make it work and whatnot,'" but the real issue is that the data doesn't exist for the most useful information the AI needs. "Nobody wrote that down. And if nobody wrote that down, you can't train AI on it."

Paradoxically for an AI executive, Gopal argued that many businesses exist that AI can never be trained on, "because this is real-life business that moves." Real people who have conversations and continually update the business context will always be one step ahead of the machines, he explained. "Are you going to retrain for that one individual conversation for one day?" he asked -- and then retrain on a rolling basis every time your business context changes?

Gopal agreed with his interviewer that journalism is one profession that could resist automation, because readers are interested in human insight, deep sourcing and forward-looking analysis, things that AI can't easily reproduce, if ever. He also mentioned salespeople, marketers and operations staff. People in the field who have to make real-time decisions are inherently protected, in his view.

Gopal isn't the only executive who recognizes that AI requires human deployment to function. Tatyana Mamut, a former Salesforce and Amazon Web Services executive who now offers AI agent-monitoring services through her startup Wayfound.AI, told Fortune that "we need to stop talking about AI like tools. It is not a tool, right? It's not like a hammer." Rather, she argued, it's more like a hammer "that thinks for itself, can design a house, can build a house better than most people who work in the construction industry can build a house." It still needs to be shown the construction plans, though. Regarding business context, Mamut said she thinks "very few" people really understand how to make this work with AI. "You need like real tools and mechanisms to capture that contextual learning."
Companies with different brands, different systems and different processes all have different contexts that need to be captured by AI, she said, predicting that the smart SaaS companies will pivot into this territory. Instead of software-as-a-service, she said, expert services will be delivered via agents with proper context capture.

Gopal was bearish about how much of this context can be captured, estimating that 70% of the effort required to make AI useful depends on unwritten business context that exists only in human heads. "You fundamentally cannot train a system" on this fluid daily reality, Gopal explained, noting that real-life business constantly changes based on individual conversations and human interactions. While AI can automate tasks at the absolute top (coding) and the absolute bottom (physical robotics), the vast middle ground of knowledge work requires human context.

Ed Meyercord has been deploying machine-learning processes for over a decade at Extreme Networks, a networking company that powers pro football and baseball stadiums and draws in over $1 billion in revenue. He told Fortune in a recent interview that he sees dynamics similar to Gopal's on the operator's side of the table. His teams already use agents to design networks, spot failures before they happen, and even communicate with other agents in systems like ServiceNow, but he is adamant that there is always a human in the loop to review the work when the stakes are critical infrastructure. "A network is critical infrastructure, so we have to be right," Meyercord said. Extreme has built an agentic core into its platform, he added, "but effectively what that's allowed us to do is to be highly, highly accurate." Because accuracy is so paramount, he said, "we always want to have a human in the loop, show all the work that we're doing."

Like Gopal, Meyercord said he doesn't believe AI can simply "take our jobs" outright; the role of the human is shifting from doing every task manually to orchestrating agents, gathering the right context, and deciding which problems to point the machines at. He said his job as CEO is, in many ways, to surround himself with specialists "a lot smarter than I am" while using AI as another hyper-fast teammate rather than a replacement.

On the other hand, anything that can be automated is already vulnerable to AI, Gopal said, nodding to the "SaaSpocalypse" in markets that is brutally punishing software-as-a-service stocks, insurance, wealth management and customer service. By the end of the year, he said, this will be even more visible in company valuations, as robots hoover up the work of anything that doesn't require business context.

The exciting thing, he added, is what this means for work. The symbiotic relationship between the human worker, who has the business context, and the AI, which can work faster and even smarter but lacks that input, will define the future of white-collar work that Shumer has warned about, according to Gopal. "You have to pick and choose the context and you have to keep capturing the context, right? And I think that's really what the shift is for the average white-collar worker is that they have to understand."

Gopal related an anecdote about his team's frustration with a mediocre software engineer now that they have AI coding tools:
"We're like, 'Man, like, it's just more expensive to talk to you than it is to do it myself. Like, to explain what I need built on the product takes more time than me just slamming it out of AI on the side.'" The time it takes to talk to a mediocre engineer could be spent managing an AI's output instead, he added. He likened this to every employee having a personal technical co-founder at their side at all times, potentially enabling them to produce 20 times as much work.

Meyercord agreed, saying that computer-science graduates don't need the same skillset as before, but they will "need a different skillset." He said he's already starting to see new skillsets develop -- not necessarily all liberal-arts graduates who are deeply trained in critical thinking, but more a sense of "people that are helping us develop." He needs people who can delegate work to AI agents, talk with agents, vet their work, and oversee workflows. It sounds a lot like what Gopal predicted.

The job of the human has to evolve to feed the proper inputs to the AI agents that will power the business, Gopal predicted, and he put a name on it: "Our job as humans and people is that we are now context gatherers instead of just workers." Most people have taken this for granted until now, he said, because they didn't have AI agents to work alongside. "What makes us good at our job, and what gives us promotions, and what makes us more impactful is actually that ability to gather context. That's what makes us good."

The only people who genuinely need to fear for their jobs, Gopal warned, are those who are "refusing to grow" and deny this new reality. If everyday workers fail to adopt these tools, they risk handing all economic power to a select few who do understand the technology, potentially creating a dystopian wealth gap. But for those willing to adapt, the future is incredibly bright. "I don't think AI will just come and take our jobs," Gopal said. "That's not even kind of possible."

Meyercord said his business is still growing, and he argued that the AI job-loss narrative misses the forest for the trees. "On the one hand, you can do a lot more with less," he said, "or you could do more with the same [number of workers]. Or you could do a lot more with a little more, right?" If you hire the right context gatherers, Meyercord added, you can really grow your business. "It's like, how do you think about what you want to try to accomplish? We want to do a lot more."
[4]
No one can agree on whether AI is the next big thing or all hype. Here's why
AI is either your most helpful coworker, a glorified search engine or vastly overrated, depending on whom you ask. And no one seems to agree on which is right.

Tech executives championing AI have long spun the narrative that the tech will revolutionize jobs and bring about a new industrial revolution. Skeptics think it's all marketing hype, while some researchers and executives are sounding the alarm about safety concerns on their way out the door. The discrepancy in how people view AI has perhaps never been so apparent as this past week, after a viral essay from an AI CEO and investor claimed the tech is coming for any job that involves sitting in front of a computer.

But there may be a simpler explanation for why people have taken such divergent stands: people use different types of AI in different ways, yet it's all referred to in the same way. "There's just a wide spectrum of how much people have been exposed to the technology, how much they've used the technology," said Matt Murphy, a partner at Menlo Ventures who has led investments in AI companies including Anthropic. "And that's also changing pretty rapidly."

People who use free AI for basic tasks like making grocery lists and planning vacations are likely seeing only one side of the technology. A report from Menlo Ventures published last June estimated that only 3% of AI users are paid subscribers, although Murphy told CNN he expects that to change quickly. But those who pay get access to another feature: agents that can handle some work for you rather than just chatbots that craft responses, plus fewer limits on usage. Anthropic's Claude Cowork agent, for example, is only available in the $20-per-month Pro plan and higher. The case is similar for OpenAI's Codex coding agent.

It's that type of AI that's fueling concerns about AI's impact on jobs, including the controversial argument that Matt Shumer, an investor and former CEO of an AI startup, makes in his viral essay. "I'll tell the AI: 'I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it.' And it does. It writes tens of thousands of lines of code," Shumer wrote. He went on to claim that the AI was able to test the app and make decisions involving taste and judgment. And he surmised that if AI could write code that well, it could begin improving upon itself as well. (AI researchers accused Shumer of exaggerating the performance of his AI model in 2024. He apologized at the time and told CNN it was the "biggest mistake" of his "professional life" and that he learned through the process.)

Some experts are skeptical that the use cases Shumer outlined are possible even with paid plans, especially since he was vague about which model he used and what type of app the AI built for him. Shumer told CNN he primarily uses OpenAI's GPT-5.3 Codex tool and that he was working on a "medium to high complexity app" for testing purposes.

Still, the free version of AI apps doesn't paint the full picture of what the technology is capable of, according to Emily DeJeu, a professor who teaches courses on the use of AI in business at Carnegie Mellon University. She said it would be "misguided" to make assumptions about AI's capabilities based solely on free AI services.
Oren Etzioni, professor emeritus at the University of Washington and former CEO of the Allen Institute for Artificial Intelligence, described the gap between the free and paid tiers of AI as the difference between an eager yet inexperienced intern and a seasoned, hard-working one. Free AI tiers are good at writing summaries and generating content, but users will typically have to pay to conduct deep research or draft sophisticated documents with AI. While free AI can "give you surprisingly good advice" and "engage with you in a surprisingly sophisticated dialog, you wouldn't want to use one of those as your attorney or even as your paralegal," he said.

But AI companies are increasingly trickling more advanced features down to the free tier, which is part of why James Landay, cofounder of the Stanford Institute for Human-Centered AI, said he doesn't see a big difference between free and paywalled AI. Case in point: Anthropic launched a new model called Sonnet 4.6 on Tuesday that it says will bring performance closer to its more advanced Opus models, which are available only in its paid plans.

Software stocks plummeted in early February after Anthropic released a tool tailoring its AI helper to individual industries, like legal and financial analysis. That launch, followed by Shumer's essay, stoked concerns that AI will eventually automate knowledge work broadly, the way it's starting to streamline software engineering jobs.

Yet there's also growing skepticism about whether AI is living up to these lofty declarations, often made by tech executives with financial interests in the technology's success. Some studies have poured cold water on how capable AI truly is and how quickly it's being adopted. A group of researchers from the Center for AI Safety and Scale AI found last year that leading AI models produced flawed results when tasked with work assignments like visualizing data and coding video games. Model Evaluation and Threat Research, an organization that tests AI models, found in July that developers took 19% longer to work on their code when using AI, although that research was based on tools from early 2025.

Landay also says the essay overstates the role AI is playing in software development. AI is a helpful tool that programmers use to speed up development, but it's still prone to mistakes, and AI models aren't writing themselves. And while experts widely agree that AI will change many industries, its proficiency at coding shouldn't be taken as a sign that it will perform the same way in other professions. "(Coding is) also a logical structure, which is a really good fit for a machine to also be able to test the code and see if it works," he said. "Many people's jobs are not structured in that way."
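Landay's point about logical structure can be made concrete. Code, unlike most knowledge work, ships with a mechanical pass/fail oracle: a test suite. The sketch below is a hypothetical example (the function and tests are invented for illustration) of how generated code can be checked automatically in a way a legal brief or a marketing plan cannot:

```python
# A hedged sketch of the built-in oracle Landay describes: slugify() stands
# in for AI-generated code, and the tests verify it mechanically.
# All names here are hypothetical.
import unittest

def slugify(title: str) -> str:
    """Turn a headline into a URL slug -- the kind of small,
    well-specified task code generators handle well."""
    words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
    return "-".join(words)

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("AI Isn't Coming for Every Job"),
                         "ai-isn-t-coming-for-every-job")

    def test_collapses_whitespace_and_symbols(self):
        self.assertEqual(slugify("  Hello,   World! "), "hello-world")

    def test_empty(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()  # exits 0 if the generated code passes, nonzero if not
```

If the suite passes, the machine knows it; most professions have no equivalent of that exit code.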
Artificial intelligence tools like OpenAI's Codex and Anthropic's Claude Code are accelerating software development, cutting project timelines from months to days. But experienced programmers warn that AI code generators produce slop code requiring extensive oversight, create technical debt, and fuel workload creep. While junior programmers face displacement, experts remain divided on whether these tools will expand or contract the overall job market for coders.
Artificial intelligence is reshaping how programmers build software, with AI code generators demonstrating capabilities that seemed impossible just months ago. When Perry Metzger, a veteran programmer since the 1970s, tested OpenAI's Codex in January, he and his partner built an online word processor comparable to Google Docs in just two days -- a project that would have required at least two months without AI assistance [1]. "I used to do the coding and they would help me do the work," Metzger explained. "Now, I supervise them as they do the work." [1]
This acceleration in software development has caught the attention of Wall Street and sparked widespread concern about coding jobs. In early February, investors triggered a sell-off predicting that AI would undermine traditional software companies, while tech executives made bold claims about automation. Microsoft's AI chief Mustafa Suleyman projected that virtually all white-collar tasks would be automated within 18 months, while Microsoft and Google reported that over a quarter of their code is now AI-generated [2]. Matt Shumer, an AI investor, wrote a viral essay comparing this moment to February 2020, warning that white-collar workers face a pandemic-like extinction event [3].
But the reality of AI's impact on white-collar jobs proves far more nuanced than the hype suggests. Carnegie Mellon University computer scientists published studies in November and January examining how experienced programmers use AI code generators over several months. Both studies found that while these tools speed up initial development, they degrade code quality, creating what programmers call technical debt [1]. "There were significant speedups in terms of the amount of code that is produced," said Professor Bogdan Vasilescu. "But that came at a cost." [1] This technical debt includes security flaws that expose applications to hacking and data theft.

Dax Raad, a programmer whose company OpenAuth sells AI tools, delivered a scathing assessment of how organizations deploy these technologies. "Your org rarely has good ideas. Ideas being expensive to implement was actually helping," Raad wrote, arguing that workers use AI "to churn out their tasks with less energy spend" rather than becoming more effective [2]. The result is slop code that masquerades as quality work but requires someone downstream to fix it, breeding resentment among coworkers and slowing projects [2].

Research documented in Harvard Business Review reveals another unexpected consequence: workload creep. A study monitoring 200 employees at a U.S. tech company found that AI intensified jobs rather than reducing workloads [2]. Using AI to accelerate tasks created a vicious cycle in which AI raised expectations for output speed, making workers more reliant on AI to meet greater demands. The outcome included worker fatigue, burnout, and lower-quality work -- hardly the productivity revolution promised by tech executives [2].

Tanmai Gopal, CEO of PromptQL, a $1 billion Bay Area unicorn helping companies with AI adoption, argues that Silicon Valley's doomsday predictions reflect narcissism and self-projection [3]. "Tech people think like, this affects me. So it's going to affect everyone like that," Gopal told Fortune [3]. However, he acknowledges that software engineering jobs face a genuine paradigm shift, with the latest AI models possessing the judgment of an "average senior software engineer" for converting business context into technical code [3].
Despite these advances, building complex applications that serve billions of users requires human business context that AI cannot access. "AI is not useful" without extensive business context, Gopal explained, noting that critical knowledge lives inside people's heads and hasn't been translated to data [3]. While Anthropic's Claude Code went viral as non-programmers built personal laundry apps, these simple projects differ vastly from the enterprise software powering businesses and governments [1].

Most experts agree that AI will replace junior programmers, with using these tools feeling like delegating to someone still learning the trade [1]. But they're divided on the overall market impact. Grady Booch, former chief scientist for software engineering at IBM Research, argues that AI code generators will expand opportunities as programmers build increasingly complex applications. "If you are a skilled programmer, there will be more work for you, and you will find more exciting things to do," Booch said [1].
The gap between free and paid AI services also shapes perceptions. Only 3% of AI users subscribe to paid plans, according to Menlo Ventures, yet paid tiers offer AI agents that handle work autonomously rather than just chatbots [4]. Anthropic's Claude Cowork agent and OpenAI's Codex are only available in plans costing $20 per month or more [4]. This disparity helps explain why assessments of AI's capabilities vary so dramatically.

As Raad observed, even when AI produces work faster, organizations remain "bottlenecked by bureaucracy and the dozen other realities of shipping something real" [2]. The automation of software development still requires human critical thinking and oversight to prevent the proliferation of flawed code that creates more problems than it solves.