12 Sources
[1]
The first signs of burnout are coming from the people who embrace AI the most
The most seductive narrative in American work culture right now isn't that AI will take your job. It's that AI will save you from it. That's the version the industry has spent the last three years selling to millions of nervous people who are eager to buy it. Yes, some white-collar jobs will disappear. But for most other roles, the argument goes, AI is a force multiplier. You become a more capable, more indispensable lawyer, consultant, writer, coder, financial analyst -- and so on. The tools work for you, you work less hard, everybody wins. But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn't a productivity revolution. It finds companies are at risk of becoming burnout machines. As part of what they describe as "in-progress research," the researchers spent eight months inside a 200-person tech company watching what happened when workers genuinely embraced AI. What they found across more than 40 "in-depth" interviews was that nobody was pressured at this company. Nobody was told to hit new targets. People just started doing more because the tools made more feel doable. But because they could do these things, work began bleeding into lunch breaks and late evenings. The employees' to-do lists expanded to fill every hour that AI freed up, and then kept going. As one engineer told them, "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more." Over on the tech industry forum Hacker News, one commenter had the same reaction, writing, "I feel this. Since my team has jumped into an AI everything working style, expectations have tripled, stress has tripled and actual productivity has only gone up by maybe 10%. 
It feels like leadership is putting immense pressure on everyone to prove their investment in AI is worth it and we all feel the pressure to try to show them it is while actually having to work longer hours to do so." It's fascinating and also alarming. The argument about AI and work has always stalled on the same question -- are the gains real? But too few have stopped to ask what happens when they are. The HBR study isn't entirely novel. A separate trial last summer found experienced developers using AI tools took 19% longer on tasks while believing they were 20% faster. Around the same time, a National Bureau of Economic Research study tracking AI adoption across thousands of workplaces found that productivity gains amounted to just 3% in time savings, with no significant impact on earnings or hours worked in any occupation. Both studies have gotten picked apart. This one may be harder to dismiss because it doesn't challenge the premise that AI can augment what employees can do on their own. It confirms it, then shows where all that augmentation actually leads, which is "fatigue, burnout, and a growing sense that work is harder to step away from, especially as organizational expectations for speed and responsiveness rise," according to the researchers. The industry bet that helping people do more would be the answer to everything. It may turn out to be the beginning of a different problem entirely.
[2]
Using AI at Work May Actually Make Your Days Longer and More Unpleasant, Study Finds
Every company seems determined to integrate AI, but the gains might not have a long shelf life. After an "initial productivity surge," employees who used AI reported more intense workdays and less work-life balance, and they produced lower-quality work overall, according to an ongoing study first published this week in the Harvard Business Review. Researchers from the University of California, Berkeley, studied the habits and behaviors of about 200 people using generative AI in their work at a technology company for eight months. The company offered employees enterprise-level subscriptions to AI products. The employees weren't required to use AI, but many did. What happened next is exactly what AI companies hope happens: Employees who used AI worked faster and took on more responsibilities. But there were unintended consequences that showed the limits of current AI tools in the workplace. One of the biggest selling points of AI in the workplace is that it can help employees handle tasks that might otherwise have been outside their expertise or skill set. Non-developers, for example, can now vibe code nearly any project. Employees in the study did this, taking on work that would otherwise have been delegated or avoided, the authors noted. So employees inadvertently created more work for themselves, putting more on their plates and struggling to balance it all. We also know AI as a work hack isn't without downsides. AI outputs are rarely ready to go without first being reviewed by a real human. A September 2025 study found that employees spend hours each week dealing with their colleagues' and their own low-quality or error-ridden AI work, sometimes called "workslop." A 2025 enterprise report from OpenAI said employees saved an average of only 40 to 60 minutes a week, with more time saved for power AI users.
That time saved by AI might not have made a measurable difference in work-life balance. The employees in the UC Berkeley study actually ended up working longer hours. The always-available and easy-to-use nature of AI made it simple for them to run a query during their lunch break or ask a quick question after logging off. Even when employees had the sense of having a digital partner, their cognitive loads didn't necessarily decrease, and there were still expectations to deliver results quickly because they were using AI to help. This is why the UC Berkeley researchers said AI was more likely to "intensify" work rather than reduce it. Authors Aruna Ranganathan and Xingqi Maggie Ye offer solutions centered around culture and norms that companies can adopt to prevent AI-powered burnout. These include protecting time for human connection, prioritizing quality results over speed, and ensuring that employees have blocked focus time without AI interruption. Being intentional with AI usage -- both in and outside of work -- is one of the best ways to prevent misuse and create work that isn't sloppy. Across industries, workers have worried that advances in AI will wipe out their jobs. Anthropic CEO Dario Amodei recently said AI could result in "unusually painful" short-term disruptions to the workforce. And Amazon's latest rounds of layoffs, affecting thousands of workers, were made explicitly because the company expected AI to fill in the gaps and help remaining employees do more with fewer resources. But we've seen ample evidence that while AI can help you do some tasks, it's unlikely to actually fulfill entire roles in most industries.
[3]
Using AI actually increases burnout despite productivity improvements, study shows -- data illustrates how AI made workers take on tasks they would have otherwise avoided or outsourced
Often, the pressure to succeed comes from the employees themselves, which could be half the problem. The burning question at the heart of the AI revolution in the workplace is ultimately: Is it worth it? Does productivity improve? Do costs come down? Is it remotely as progressive and transformative as the AI developers claim? A new study published in the Harvard Business Review suggests that although AI has the potential to marginally improve productivity, it also leads to workers taking on more pressure, resulting in more unforced errors and more frequent employee burnout. The source of this success and concern didn't come from the employers, either, but from the employees themselves. This suggests that even in companies where AI use isn't mandated or even explicitly encouraged, employers may need to adopt AI codes of practice that protect their employees from their own tendencies. The study in this case saw researchers embed themselves in a U.S. technology company with around 200 employees for eight months. There, they observed employee practices, tracked internal communication channels like Slack, and conducted around 40 in-depth interviews. The results add nuance to the debate about whether AI in the workplace is effective. By observing how it was used and the effect it had on employee confidence and stress levels, the study suggests that AI at work needs careful management, even when the results are positive. Unlike other studies, which have suggested few companies have benefitted from the effects of AI on their business, or that using AI can actually increase the amount of time spent on a task even when it feels faster, this study is quite unambiguous: the AI helped employees do more, with the caveat that the increased productivity often came with an "intensification" of the workplace.
By having an AI on hand to answer simple questions and coach employees into tasks they weren't familiar with, many of them felt they could take on new tasks. "Product managers and designers began writing code; researchers took on engineering tasks; and individuals across the organization attempted work they would have outsourced, deferred, or avoided entirely in the past," researchers said. Employees felt tasks outside their usual wheelhouse were accessible and attainable because AI held their hand while they learned how to do them. This increased the scope of many employees' roles. But it also meant they were taking on tasks they really weren't qualified to do, performing work at times and in places where they might otherwise be resting, and feeling capable of juggling a more varied set of tasks at once. The big fear for many workers is that AI will replace their role. While there is some real concern, it's likely overstated. Perhaps workers should instead be concerned about their roles being taken not by AI, but by colleagues who are using AI to expand their capabilities. The issue, however, is competence. Although the HBR study suggests that workers did indeed step outside their lane to produce more, especially where they would typically have relied on a colleague or subordinate for a task, it often meant producing lackluster work. This was particularly prevalent where workers without programming experience were vibe-coding solutions to solve problems, only to informally request engineers' help to finish partially completed pull requests. Although workers often cited their AI use as "just trying things," the over-confident expansion of their own responsibilities meant that they often required additional help to do a job that would have been better handed off to a professional, or for headcount to be increased to handle the additional responsibility.
Prompts are quick and easy to assemble for many, but the responses often take time to read through or act on. That meant workers who used AI a lot would often find ways to slip in an extra prompt just before they went off to lunch, or while working on other tasks. This additional multitasking made workers feel busier and more productive. They were able to handle more tasks at once because the AI was often doing something in the background. But that added to workers' mental load. Workers were forced to jump between multiple tasks, playing havoc with their attention spans. It demanded frequent checking of AI outputs, and that sometimes led to tasks lying incomplete for longer as they joined the carousel of ongoing work that was less straightforward than it might have been in the hands of a professional whose sole role is that kind of job. Downtime was impacted too. Workers would throw out prompts while they ate their lunch or while waiting for the coffee maker to brew. This conversational style blurred the lines between what was work and what was socializing, meaning that over time, workers often filled what should be rest time with more work. Playing into the 24/7 access culture for many workers, employees in this study often found that they struggled to relax or feel rejuvenated by time away from work, because the pressure of one more easy prompt was always there. Although the study doesn't take this speculative leap, I wonder if AI use and its instant gratification of conversational-style responses mean that workers are using it a little like social media. They chase the dopamine hit of a response and the added satisfaction of completing a task, even if that means taking on more cognitive load than they can comfortably handle. Someone should do a study on whether AI response times have been designed to be addictive, as well as to give the tool enough time to produce an effective response.
The study's hypothesis that AI work begets more work proved accurate. Workers were faster and more productive with certain tasks, meaning they often took on more to fill that time, often relying on AI to do so. That reliance introduced additional work for them and their colleagues, and spread those tasks across a broader timeframe, meaning some work took longer, introduced more errors, and required additional time, effort, and mental energy. The effect was that workers who felt like they were getting a lot more done were more often getting a little more done, and ultimately found themselves burning out from the whole process. And that's within a company that didn't force AI on its employees. While employers might see workers pulling themselves up by their AI bootstraps as at least partially positive, the long-term negative effects could be dire. Errors may mount, experienced workers may leave, and businesses may find it increasingly hard to spot what's actually beneficial and what's just AI-induced busywork. To combat this, researchers suggest employers should adopt AI codes of practice, whether they encourage AI use or not. These should involve dedicated pauses that force workers to reflect on how a task is being managed and whether it could be handled differently. This should go hand in hand with focusing on a single task, or a limited number of tasks, at one time. To help maintain attention spans and avoid constant work creep, prompts could be sent out in batches and their responses actioned within a specific time window. Instead of constantly receiving notifications, workers could be sent a report on AI activity and responses once an hour, instead of at the rate of conversation. The authors also emphasized the importance of human connection: lunch breaks, water-cooler chats, employer check-ins, and wellbeing meetings. These spaces should be protected from AI creeping into that time.
"By institutionalizing time and space for listening and dialogue, organizations re-anchor work in social context and help counter the depleting, individualizing effects of fast, AI-mediated work," the study concluded.
[4]
Researchers Studied Work Habits in a Heavily AI-Pilled Workplace. They Sound Hellish
One could be forgiven for thinking that automation tools would make arduous tasks redundant, and make work more relaxing overall. But this elides an important law of the universe: the ratchet of productivity only turns one way. That is, it's a modern-day truism that if automation -- AI or otherwise -- makes any sort of positive change in your work life, you'll feel a sort of squeezing sensation, and additional work will materialize to erase any momentary feelings of relief. According to a case study highlighted in some "in-progress research" from Aruna Ranganathan, who teaches management at UC-Berkeley, and Xingqi Maggie Ye, a Ph.D. student who is part of Ranganathan's Berkeley program, AI "intensifies" work, and certainly doesn't make people's days easier. It sounds, in other words, like hell on earth. If that is, paradoxically, what you want in your workday, then you probably work in a place like Silicon Valley, or even at OpenAI, where CEO Sam Altman has described AI's ability to intensify his own work in ways that make him sound strangely awed and humbled (even as he expresses little to no regret about his ambition to annihilate knowledge worker jobs). "I don't think I can come up with ideas fast enough anymore," he said in an interview in October of last year, adding, "I think it will mean that stuff just happens faster and that you can... that you can try a lot more stuff, and figure out the better ideas quickly." Altman's experience may resonate with the workers mentioned in the article about Ranganathan and Ye's research for Harvard Business Review. They describe an eight-month study into generative AI's effects on working life at a company with about 200 employees. Employees "worked at a faster pace," the authors write, covered a "broader scope of tasks," and found themselves working "more hours of the day, often without being asked to do so." This was a workplace that, Ranganathan and Ye explain, didn't mandate AI use. It just made enterprise AI tools available.
This doesn't sound like a 200-person workplace where widgets were being glued together. Instead, many of the roles described in the article involve engineering, writing code, and communicating in Slack, so it's safe to say these were knowledge workers and software engineers, quite possibly making use of tools like Claude Code. Due to AI, many of Ranganathan and Ye's subjects, it seems, started expanding the scope of their jobs, usurping one another's roles, and taking on roles coaching others on coding, or correcting their vibe-coded work. Hiring new employees may have been postponed or circumvented altogether, because employees "absorbed work that might previously have justified additional help or headcount." Workers also, it seems, furtively fed tasks into their AI tools while they were ostensibly in meetings, and submitted prompts while on breaks, while waiting for things to load, or while they were supposed to be having lunch. How you interpret this case study is going to vary. If your workplace is a startup in "founder mode" and everyone in your office is working punishing hours in exchange for equity in a company that everyone hopes will be a unicorn, I'm guessing you'll probably love the sound of this -- particularly if you're a CEO/founder and you're planning to become a billionaire. That's far from a universal experience, however. According to a 2024 Pew survey, about half of U.S. workers reported that they were either somewhat satisfied or "not too/not at all satisfied," and the other half said they were "extremely/very satisfied." That "extremely/very satisfied" group shrinks from 50% to 42% when the respondent has a lower income. That survey also found that far and away the most satisfying aspects of a job according to respondents are other humans, with 64 percent reporting being "extremely/very satisfied" with their relationships with their co-workers.
Skills development, meanwhile, ranked low, with 37 percent reporting being "extremely/very satisfied" with that aspect of a given job. So I don't get the impression that having fewer colleagues, having to learn to do more things, and work that seeps into breaks will help most people's job satisfaction, but maybe I lack a certain kind of vision. In other words, if instead of building an app, you're someone who works as, say, a hospital receptionist or a school administrator, you're probably not all that stoked about a hypothetical where hiring is postponed, you have to do other people's jobs, you'll work on your breaks, and instead of getting new, helpful software, you're getting enterprise AI tools so you can make your own software. But let's not assume that all tech workers love this kind of productivity theater, or that the sense of greater productivity in Ranganathan and Ye's case study is necessarily anything other than an illusion. An anonymous worker at the cybersecurity firm Crowdstrike wrote into the newsletter Blood in the Machine last year, and said workers at that company "have been encouraged to handle the additional per capita workload by simply working harder and sometimes working longer for no additional compensation," and that "While our Machine Learning systems continue to perform with excellence, I have yet to be convinced that our usage of genAI has been productive in the context of the proofreading, troubleshooting, and general babysitting it requires." According to this person, "The net result is not a lightening of the load as has been so often promised," and "Morale is at an all-time low."
[5]
'We have to learn to embrace the imperfect nature of human solutions...' -- what we lose when AI starts doing all our thinking at work
The hidden mental cost of letting AI do too much of our thinking The AI work dream we were sold went like this: use AI and work gets easier, days feel calmer, and your mind is finally free to focus on what matters. More interesting tasks, more creative thinking, more energy left for life outside work. But that promise is already starting to crack. A 2025 MIT report suggests that around 95% of generative AI pilots inside companies are failing to deliver on their promises. Other research, including work from the Stanford Social Media Lab in collaboration with research team BetterUp Lab, suggests AI tools can end up creating more work, not less. Much of the conversation has focused on what this means for businesses and the bottom line. But as evidence also grows linking heavy AI use to weaker critical thinking and learning skills, a more urgent question emerges: what is using AI at work doing to our minds? To explore that, I spoke to Ellen Scott, journalist, Digital Editor of Stylist and author of Working on Purpose. She's spent years thinking deeply about modern work, its demands, contradictions, and emotional toll. Scott describes the effect AI is having on our working lives as "smoothout": the gradual erosion of friction, challenge, and agency in the name of ease. I wanted to understand what smoothout looks like in practice, why it's so common, and how we might resist it. "Smoothout is a term I coined to describe a specific type of burnout that comes from overusing AI," Scott explains. While it isn't a formal medical diagnosis, she uses it to describe a pattern she's increasingly seeing emerge around AI use at work, particularly as employers push wider adoption of generative tools. Scott describes smoothout as "a cousin of burnout", because the symptoms often overlap: tiredness, low mood, stress, fatigue, a loss of motivation. It also shares similarities with 'boreout', the mental health impact of being consistently understimulated at work. 
But the key difference is the trigger. With smoothout, the cause isn't overwork or boredom, but reliance on AI tools in place of challenge. "When we don't have sufficient challenges, or the opportunity for the mental-health-boosting experience of mastery, our sense of accomplishment drops," Scott explains. "We become disengaged, and negative stress symptoms are triggered." She says that the problem is how quickly we now turn to AI at the first hint of difficulty, especially at work. In doing so, we might lose something important. "We rid ourselves of the opportunity for challenge and the healthy form of stress associated with that, known as eustress," she explains. "Smoothout happens when work and life become too smooth. Our brains crave friction." There's a solid body of research suggesting our brains thrive on the right amount of challenge. Psychologist Mihaly Csikszentmihalyi, the author of Flow: The Psychology of Optimal Experience, showed that people are most engaged and fulfilled when the challenge of a task matches their skill level. Too easy and we disengage; too hard and we shut down. Educational theory echoes this. Psychologist Lev Vygotsky's work on the Zone of Proximal Development shows that learning happens best when we're stretched just beyond what we can do without help. Cognitive research on "desirable difficulties" reaches a similar conclusion: tasks that require effort lead to deeper learning than frictionless shortcuts. One way to understand what's happening with AI is through the idea of cognitive offloading, which is using tools or external systems to reduce mental effort. We've always done this -- think writing notes, setting reminders, and using calculators. Offloading can be helpful because it frees up cognitive resources and allows us to focus elsewhere. But AI isn't just another notepad or calculator. It can think with us, or even instead of us, across a wide range of tasks.
That's why early research suggests the cognitive offloading that happens when we use AI may be fundamentally different, with potential consequences for learning, memory, and skill development. The answer isn't to stop using AI entirely, especially if it's part of your job, but to think more carefully about when you reach for it. "I'm not anti-AI in all cases," Scott explains. "But it's the over-reliance on it that isn't good for our minds." She says the key is whether AI is replacing the parts of work that challenge us, rather than supporting us through the parts that drain us. "AI should be used to do the parts of work that aren't beneficial for mental or physical wellbeing," she says, like the monotonous, administrative tasks that add little meaning. Not the "meaty" parts of work that create challenge, engagement, and a sense of fulfilment. Which tasks fall into each category will differ from person to person. As a writer and editor, Scott values the challenge of writing an introduction to an article, but doesn't need the mental stretch of processing invoices. Using AI for the latter makes sense. Using it for the former does not. "I care about writing. It's the part of my job I most value, and my writing skills are something I always want to hone and develop. If I outsource that challenge, I deny myself the opportunity to do that," Scott says. Her advice is to ask yourself some simple but uncomfortable questions: which tasks distract from the point of your job without offering challenge or meaning? You could use AI for those. And which ones actually matter to you? Resist the temptation to use AI, even when it feels uncomfortable. Learning to notice, acknowledge, and accept discomfort is a big part of this process. "Even if it's difficult, we need to push through and do certain things ourselves," Scott says. "We have to learn to embrace the imperfect nature of human solutions, rather than defaulting to the smoother versions produced by AI." 
If the line isn't clear, she also suggests paying attention to how you feel. Not just while you're working, but afterwards. Are you engaged, depleted, restless, detached? Scott says that journaling can help people spot patterns between how they use AI and how their work affects them emotionally. Underlying all of this is a bigger point: work does give many of us a sense of purpose. While plenty of people feel burned out or disconnected from their jobs, the idea that AI will remove work altogether and make us happier by default is likely misguided. "In my book, Working on Purpose, there's a section about why work is good for us and why I see fulfilling work as a human right," Scott says. "I believe that bosses have a moral responsibility to ensure that the work they pay people to do is enjoyable, interesting, and fulfilling." And sometimes, smoothout might be a bigger signal you need to pay attention to. "If the bulk of your work can be done by AI with no real harm, that may be a sign you need a new challenge," Scott says. Work shouldn't be an impossible slog, but it shouldn't be a frictionless slide either. The task now is to resist defaulting to AI at every point of difficulty and instead find the level of challenge that keeps you engaged, learning, and feeling mentally well.
[6]
AI boosts productivity for workers but could hurt them long-term, study finds
The big picture: Companies may need to institute guardrails on what and for how long employees can use AI to boost their tasks to protect those productivity gains and avoid overwork, researchers wrote in Harvard Business Review. Driving the news: The researchers wrote this week that AI increased productivity for employees, rather than reducing work for them to do, in a recent eight-month study. * Their analysis of a U.S.-based technology company with about 200 workers found that "employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so." * The company did not mandate AI use, they said. Zoom in: The researchers found that AI increased productivity in three key ways. * Task expansion: Workers increasingly stepped into responsibilities that belonged to others, such as product designers writing code. * Blurring boundaries between work and non-work: "Many prompted AI during lunch, in meetings, or while waiting for a file to load." * More multitasking: Employees performed more tasks simultaneously because "they felt they had a 'partner.'" Friction point: While AI increased productivity at the company, "Over time, this rhythm raised expectations for speed," the researchers wrote. * "Many workers noted that they were doing more at once" than before AI, "even though the time savings from automation had ostensibly been meant to reduce such pressure." That could be bad for both company leadership and employees. * Overwork can impair judgment and lead to errors, and cause "fatigue, burnout, and a growing sense that work is harder to step away from." Reality check: Studies show that long-term AI use may not be healthy. * A study last year from researchers at MIT's Media Lab found that regular ChatGPT users underperformed their peers "at neural, linguistic, and behavioral levels."
What we're watching: The researchers suggested that companies institute an "AI practice" to address overwork, which includes routines around starting, stopping, and limiting the scope of AI use. Zoom out: AI's transformation of the labor market has mostly left hiring and firing at a stalemate, but its impact is still growing, data shows.
[7]
AI Is a Burnout Machine
Some software engineers are finding that AI is speeding up their work, but at a cost: it's also accelerating them towards burnout. Siddhant Khare is one of those programmers. In an interview with Business Insider, he lamented that while AI has made him more productive, it's also led him to feel that his job was harder than ever. "We used to call it an engineer, now it is like a reviewer," Khare told BI. "Every time it feels like you are a judge at an assembly line and that assembly line is never-ending, you just keep stamping those [pull requests]." AI, he argued, creates a productivity "paradox" by lowering the cost of production, but increasing the cost of "coordination, review, and decision-making," all of which falls on a human to solve. "I shipped more code last quarter than any quarter in my career," he wrote in an essay, titled "AI Fatigue Is Real," posted on his blog. "I also felt more drained than any quarter in my career." Khare's account echoes the findings of a recent study reported in Harvard Business Review. After closely monitoring the two hundred employees at a US tech company, the researchers observed that AI was actually intensifying work, instead of reducing workloads. Forming a vicious cycle, AI "accelerated certain tasks, which raised expectations for speed; higher speed made workers more reliant on AI," the researchers wrote. "Increased reliance widened the scope of what workers attempted, and a wider scope further expanded the quantity and density of work." AI adoption was voluntary at the company, and the initial enthusiasm for experimenting with the AI tools helped boost productivity. But this caused a nefarious "workload creep," in which the employees, without realizing it, took on more tasks than was sustainable for them to keep doing. 
Multitasking also became more common, with some employees finding that they were no longer focusing on one task, and instead were continually switching their attention, creating the sense that they were "always juggling." Khare described something similar. Before AI, he wrote in his essay, he might spend a "full day" in "deep focus" on one problem. "Now? I might touch six different problems in a day," he wrote. "Each one 'only takes an hour with AI.' But context-switching between six problems is brutally expensive for the human brain. The AI doesn't get tired between problems. I do." Khare also blames AI for the apparent regression of his coding skills. "It's like GPS and navigation. Before GPS, you built mental maps. You knew your city. You could reason about routes," Khare wrote in his blog post. "After years of GPS, you can't navigate without it. The skill atrophied because you stopped using it." That said, Khare isn't anti-AI. He believes he can keep using it in a healthy way, which in a different context might be interpreted as the cliché excuse of an addict. He's experimented with several ways to keep his AI habit in check and recommended some for his readers, too. But some of the onus also falls on AI companies, he argues. "You need to keep some sort of guardrails for the humans, so they don't self-destruct themselves," Khare told BI.
[8]
In the workforce, AI is having the opposite effect it was supposed to, UC Berkeley researchers warn | Fortune
AI is making workers more productive, but it could also be burning them out, according to a new study by researchers at the University of California, Berkeley. The productivity revolution AI promised is already taking hold in corporate America, including at an unnamed 200-person U.S. tech firm studied by the Berkeley researchers, according to an article about the in-progress research published in the Harvard Business Review. Over the course of eight months and with the help of 40 "in-depth" interviews across engineering, product, design, research, and operations, the researchers found employees using AI tools increased both the amount of work they could complete and the variety of tasks they could tackle -- even when they weren't forced to adopt the technology. Yet as employees' productivity increased, so did the amount of work they took on, in part because AI made it easy to begin tasks. Soon, some workers were using up what had previously been natural breaks during the day to prompt AI, eventually filling most of their time at the office with tasks. This type of implicit pressure, paired with a lack of time to recharge, could leave workers less productive, Rebecca Silverstein, a licensed clinical social worker and the program director at Brooklyn-based Elevate Point, told Fortune. When workers fill every part of their day with tasks and sacrifice their breaks, they give up the interpersonal relationships that are just as important to a person's work life as the work itself. People also need these breaks, either during the day or after work, to recharge and retain the capacity to work effectively, she added. "Just focusing on that productivity mindset, in the long term, is super harmful for someone," Silverstein said. And as one worker who was interviewed by the Berkeley researchers put it: "You had thought that maybe, 'Oh, because you could be more productive with AI, then you save some time, you can work less.'
But then really, you don't work less. You just work the same amount or even more." The researchers warned that while the idea of workers taking on more tasks voluntarily could seem ideal, nonstop work has the potential to lead to problems down the line, including a blurred boundary between work and non-work, as well as burnout and cognitive fatigue. Worse yet, employees' focus on supercharging their productivity could lead to lower-quality work, the researchers found. Workers described having a "partner" in AI that helped them take on a larger variety of tasks, and yet doing so led to more multitasking and task-switching, which previous studies have shown to decrease productivity. When workers found that each of them was doing more work with the help of the technology, this created implicit pressure that weighed on them mentally, the researchers found. To battle the trend of AI overload, the UC Berkeley researchers recommended that organizations take the time to be intentional. They suggested incorporating pauses into work to better evaluate decisions or reconsider assumptions, as well as organizing work to protect employees' windows of focus from interruption. Companies should also prioritize human connection and social exchange, the researchers said. Josh Cardoz, who advises organizations on enabling people in the AI era in his work as chief creative and learning officer at Sponge, told Fortune that organizations also need to make sure that by encouraging AI use, either explicitly or implicitly, they are not sacrificing work quality. These changes have to come from the top, he said. Company leaders need to define explicitly what AI fluency means for employees depending on their role. When they make decisions about AI strategy, they should encourage employees' input. Workers who are already making the most of AI should also be uplifted by the company, he said.
Most importantly, when it comes to this rapid change in the workplace, Cardoz said companies need to get back to basics: encouraging employees to adopt the new technology, but also reassuring them, to help decrease the fear and anxiety that accompany the unknown. "You need to remember that there's a human factor in all of this," he said.
[9]
AI Promised to Save Time -- Instead It's Created a New Kind of Burnout - Decrypt
The real shift isn't job loss -- it's work intensification and reorganization. A new study published in Harvard Business Review this week confirmed what many workers already suspected: AI tools don't reduce work, they intensify it. The study drew on data from UC Berkeley and Yale, collected during eight months of embedded research at a 200-person tech company where employees voluntarily adopted AI tools. The results showed distinct patterns of work intensification that quietly snowballed into what researchers call "workload creep." First came task expansion. Product managers began writing code. Researchers took on engineering work. Roles that once came with clear boundaries blurred as workers handled jobs that previously sat outside their remit. AI made that shift feel feasible. "You had thought that maybe, 'oh, because you could be more productive with AI, then you save some time, you can work less,'" one engineer told researchers. "But then, really, you don't work less. You just work the same amount or even more." This created a ripple effect. Engineers suddenly found themselves reviewing, correcting, and coaching colleagues who were, as one participant perfectly described it, vibe-coding. The person who automated part of their job had just created more work for someone else. Second came blurred boundaries. AI's conversational interface made starting work feel effortless -- no blank-page paralysis, no intimidating learning curve. So workers started sending "quick last prompts" before leaving their desks, letting AI handle chores while they stepped away. Many even prompted AI during their free time, to the point that work-related AI use outside working hours piled up, leaving days with fewer natural pauses. Third came a surge in multitasking. Employees were expected to manage multiple workstreams simultaneously, as AI gave the impression that tasks could be handled in the background.
The promised productivity gains often translated into constant attention-switching and longer task lists. Put it all together, and you get what researchers define as a self-reinforcing cycle: AI makes things easier, so workers do more of those things, which makes them rely more on AI to make those things easier. Rinse, repeat, burnout. "Several participants noted that although they felt more productive, they did not feel less busy, and in some cases felt busier than before," the researchers note. Workers are slowly being laid off, and those who remain are being stretched to the point of burnout. A new DHR Global survey of 1,500 corporate professionals found 83% experiencing burnout, with overwhelming workloads and excessive hours as the top culprits. The data also showed a sharp gap by seniority: burnout was reported by 62% of associates and 61% of entry-level workers, versus 38% among C-suite leaders. Back in 2024, the Upwork Research Institute reported that 77% of employees using AI said these tools had decreased their productivity and increased their workload. This year, the same institute reported that the most in-demand skills over the last few months have been related to AI. The Berkeley researchers emphasize that this work expansion might look productive in the short term, but could give way to cognitive fatigue, weakened decision-making, and eventually turnover as workers realize their workload has grown while they were busy experimenting with ChatGPT. Their solution: companies need an "AI practice," or intentional norms around AI use. Think structured pauses before major decisions, sequencing work to reduce context-switching, and protecting time for actual human connection. "Without such practices, the natural tendency of AI-assisted work is not contraction but intensification, with implications for burnout, decision quality, and long-term sustainability," the researchers concluded.
[10]
Researchers Studied What Happens When Workplaces Seriously Embrace AI, and the Results May Make You Nervous
Even if AI is -- or eventually becomes -- an incredible automation tool, will it make workers' lives easier? That's the big question explored in an ongoing study by researchers from UC Berkeley's Haas School of Business. And so far, it's not looking good for the rank and file. In a piece for Harvard Business Review, the research team's Aruna Ranganathan and Xinqi Maggie Ye reported that after closely monitoring a tech company with two hundred employees for eight months, they found that AI actually intensified the work employees had to do, instead of reducing it. This "workload creep," in which employees took on more tasks than they could sustainably handle, can create a vicious cycle that leads to fatigue, burnout, and lower-quality work. "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less," one of the employees told the researchers. "But then really, you don't work less. You just work the same amount or even more." The tech company in the study provided AI tools to its workers but didn't mandate that they use them. Adoption was voluntary. The researchers described how many employees, on their own initiative, eagerly experimented with AI tools at first, "because AI made 'doing more' feel possible, accessible, and in many cases intrinsically rewarding." This resulted in some workers increasingly absorbing tasks they'd normally outsource, the researchers said, or that would've justified hiring additional help to cover. One consequence is that once the novelty of adopting AI wears off, employees realize they've added more to their plate than they can handle. But other effects reverberated through the broader workplace. Engineers, for example, found themselves spending more time correcting the AI-generated code passed off by their coworkers.
AI also led to more multitasking, with some choosing to manually write code while an AI agent, or even multiple AI agents, cranked out their own version in the background. Rather than focusing on one task, workers were continually switching their attention, creating the sense that they were "always juggling," the researchers said. Others realized that AI had managed to slowly infiltrate their free time, with employees prompting their AI tools during lunch breaks, meetings, or right before stepping away from their PC. This blurred the line between work and non-work, the researchers wrote, with some employees describing that their downtime no longer felt as rejuvenating. In sum, the AI tools created a vicious cycle: AI "accelerated certain tasks, which raised expectations for speed; higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted, and a wider scope further expanded the quantity and density of work." The Berkeley Haas team's findings add to a growing body of evidence that cuts against the AI industry's promise that its tools will bring productivity miracles. The vast majority of companies that adopted AI saw no meaningful growth in revenue, an MIT study found. Other research has shown that AI agents frequently fail at common remote work and office tasks. And at least one study documented how employees used AI to produce shoddy "workslop" that their coworkers had to fix -- not unlike the engineers forced to correct their vibe-coding colleagues in the Berkeley Haas study -- breeding resentment and bogging down productivity. Employees remain ambivalent about the tech, with a recent survey finding that 40 percent of white-collar workers not in management roles thought AI saved them no time at work. The Berkeley Haas researchers optimistically suggest that companies should institute stronger guidelines and provide structure for how the tech is used.
But it's clear that AI can easily produce negative knock-on effects that are difficult to manage, and which we're still unpacking.
[11]
AI Promised to Save Time. Researchers Just Said It's Doing the Opposite
An ongoing study, published in the Harvard Business Review, joins a growing body of evidence that AI isn't reducing workloads at all. Instead, it appears to be intensifying them. Researchers spent eight months examining how generative AI reshaped work habits at a U.S.-based technology company with roughly 200 employees. They found that after adopting AI tools, workers moved faster, took on a wider range of tasks, and extended their work into more hours of the day, even though no one asked them to do so. Importantly, the company never required employees to use AI. It simply offered subscriptions to commercially available tools and left adoption up to individuals. Still, many workers embraced the technology enthusiastically because AI made "doing more" feel easier and more rewarding, researchers said.
[12]
AI adoption increased workload, didn't reduce it, says Harvard study
Without effective guardrails, AI turns work efficiency into employee burnout

Automation will free us from drudgery, from repetitive work. Gone will be the tyranny of the overflowing inbox and the never-ending to-do list. Humans will be free to do other creative tasks, with more free time. Doesn't this sound like the promise of every major tech leap through history? Generative AI arrived harping on that same promise when ChatGPT announced itself to the world back in 2022. However, early conclusions from an eight-month study reported in Harvard Business Review are now revealing something far less comforting. At a US-based tech company with 200 employees, the study found that instead of reducing work, AI is quietly intensifying it. Once employees gained access to generative AI tools, the study claims, they didn't work less. They worked faster, took on more tasks, and extended their work across more hours of the day - often voluntarily. In other words, AI didn't shrink the workday. It expanded it. On paper, the logic seems simple. If AI can draft documents, write code, analyse data and summarise reports, surely the human workload must fall? It's the assumption driving GenAI adoption at workplaces across the board, not just at the company in the study. In practice, the opposite happens. AI lowers the barrier to starting almost any task. The difficulty of writing on a blank page disappears. For someone who's never coded before, an unfamiliar programming language becomes approachable. Researching anything becomes easier. And just like that, tasks that once required specialists or were postponed indefinitely feel doable. So people do them. Work that once required collaboration, delegation or new hiring quietly gets absorbed into existing roles.
For example, product managers dabble in writing code and designers experiment with data analysis, as AI starts to erase employees' functional boundaries. They start to feel everything is possible with the help of AI - at least at the beginning of the adoption curve, the study suggests. What emerges isn't efficiency of work, but expansion of work hours. Because prompting an AI feels conversational - almost casual - work begins to slip into the cracks of daily life. A quick prompt during lunch. A draft refined while waiting for a meeting to start. A "last prompt" before stepping away from the desk so the AI can work in the background. None of these moments feel like real work. Yet together, they create a workday with no pause button and almost no true downtime. AI also leads employees to feel they can juggle several things at once in parallel. Employees write while AI generates alternatives. Long-ignored tasks are revived because "the AI can handle it." Expectations rise accordingly - if work can be done faster, more work will be done. And soon, what was once impressive starts to become the new normal. Over time, this always-on workday directly threatens the very efficiency it claims to unlock during the early adoption curve of GenAI at workplaces. Without deliberate breaks, cognitive fatigue builds in employees, and the quality of their decision-making starts to fall. What initially feels like a GenAI-fuelled productivity surge slowly morphs into quiet burnout. None of this means AI is harmful by itself, of course. It just means AI is powerful - and power, without structure, can exhaust the system supporting it. At least, that's what the study is trying to get at. Without clear norms around when to use AI at work, when to pause, and when to stop, work will naturally intensify.
Productivity gains will be real, but so will fatigue, errors and attrition. Perhaps the real promise of AI was never about doing less work. Maybe it's about redefining what meaningful work looks like, and how much of it we should be doing at all. Food for thought?
A study published in Harvard Business Review tracked a 200-person tech company for eight months and found that AI tools led to work intensification rather than relief. Employees took on more tasks, worked longer hours, and experienced burnout despite productivity gains. The research suggests AI's promise to save workers from their jobs may be creating a different problem entirely.
The narrative that AI will save workers from their jobs has dominated work culture for three years, but new research published in Harvard Business Review reveals a troubling reality. Researchers from the University of California, Berkeley embedded themselves in a 200-person technology company for eight months, conducting more than 40 in-depth interviews to understand what happens when employees genuinely embrace AI [1]. What they discovered wasn't a productivity revolution but a pattern of employee burnout driven by work intensification.
The company studied offered enterprise-level subscriptions to generative AI products without mandating their use [2]. Employees voluntarily adopted AI tools and initially experienced productivity gains. They worked faster and took on more responsibilities. But the unintended consequences quickly emerged. As one engineer told researchers, "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more" [1].
The study documented how AI enabled employees to expand their roles beyond traditional boundaries. Product managers and designers began writing code, researchers took on engineering tasks, and individuals across the organization attempted work they would have outsourced, deferred, or avoided entirely in the past [3]. This task expansion meant employees absorbed work that might previously have justified additional help or headcount. The always-available nature of AI tools also blurred work-life boundaries. Workers threw out prompts while eating lunch, waiting for coffee, or during breaks [3]. They submitted queries during meetings and asked quick questions after logging off [2]. This conversational style made it difficult to distinguish between work and socializing, resulting in longer workdays and reduced work-life balance.
After an initial productivity surge, employees produced lower-quality work overall [2]. Workers without programming experience used AI to "vibe-code" solutions, then informally requested engineers' help to finish partially completed pull requests [3]. Tasks that would have been better handled by professionals instead required additional support to complete. The mental load increased significantly through multitasking. Workers juggled multiple tasks simultaneously as AI worked in the background, forcing them to jump between responsibilities and check AI outputs frequently [3]. Even when employees felt they had a digital partner, their cognitive loads didn't decrease, and expectations to deliver results quickly persisted because they were using AI [2]. Ellen Scott, Digital Editor of Stylist and author of Working on Purpose, coined the term "smoothout" to describe this phenomenon: a type of burnout from over-reliance on AI that removes challenge and healthy stress from work [5]. "When we don't have sufficient challenges, or the opportunity for the mental-health-boosting experience of mastery, our sense of accomplishment drops," Scott explains [5].
On Hacker News, one commenter captured the experience: "Since my team has jumped into an AI everything working style, expectations have tripled, stress has tripled and actual productivity has only gone up by maybe 10%. It feels like leadership is putting immense pressure on everyone to prove their investment in AI is worth it" [1]. This matches findings from a 2025 enterprise report from OpenAI showing employees only saved an average of 40 to 60 minutes a week [2]. The Berkeley researchers found that augmentation leads to "fatigue, burnout, and a growing sense that work is harder to step away from, especially as organizational expectations for speed and responsiveness rise" [1]. The study confirms AI can augment what employees do on their own, then shows where that augmentation actually leads.
Research increasingly links heavy AI use to weaker critical thinking and learning skills [5]. The concept of cognitive offloading, using tools to reduce mental effort, takes on new dimensions with AI that can think with us or instead of us across a wide range of tasks. A 2024 Pew survey found that 64 percent of workers reported being extremely or very satisfied with their relationships with co-workers, the most satisfying aspect of their jobs, while skills development ranked low at 37 percent. Researchers Aruna Ranganathan and Xinqi Maggie Ye offer solutions centered on work culture and norms. These include protecting time for human connection, prioritizing quality results over speed, and ensuring employees have blocked focus time without AI interruption [2]. Being intentional with AI usage both in and outside of work prevents misuse and maintains quality standards. Scott emphasizes that AI should handle tasks that aren't beneficial for mental or physical wellbeing, the monotonous, administrative work, not the "meaty" parts that create challenge, engagement, and job satisfaction [5]. The industry bet that helping people do more would solve everything. It may turn out to be the beginning of a different problem entirely, one where the pressure to succeed comes from employees themselves rather than explicit mandates [3].
Summarized by Navi