17 Sources
[1]
Your coworkers are sick of your AI workslop
Bosses have to pick up hours of slack to fix it, harming careers. Workers are becoming overly reliant on AI. The result? Lackluster product, now coined "workslop," according to new research from BetterUp Labs and Stanford Social Media Lab. Workslop -- which the researchers defined as "AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task" in an accompanying write-up for Harvard Business Review (HBR) -- has some serious impacts. Forty percent of the 1,150 employees BetterUp and Stanford surveyed reported receiving workslop in the past month. It mostly occurs between peers but is also sent to managers by direct reports. Employees taking the easy way out of a work assignment isn't new, but the tools they're using to do so are. AI tools, like ChatGPT, Gemini, and various task-specific agents, are fixing code, creating presentation slides, generating text, and summarizing emails or articles for workers. As workers hand over more tasks to AI assistants and do less of the work themselves, they are turning in poorer results that someone, whether a peer or a manager, then has to redo or correct. Workslop exists across industries but disproportionately impacts professional services and technology, the researchers found. "The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver," the researchers wrote in HBR. Workslop signifies more than a simple over-reliance on technology to accomplish tasks. Not only are workers automating their responsibilities to their detriment, but their AI use leaves a coworker or superior to clean up the sloppy mess they've created.
"Workslop uniquely uses machines to offload cognitive work to another human being," the authors wrote. Worksloppy employees tend to be viewed more negatively by their peers and managers, according to the research: Half of respondents say they view workers who turn in workslop as less creative, reliable, and capable. The term also highlights a growing contradiction between AI's promises of productivity and the reality it's creating. Some reports show that AI is improving productivity, often in coding, while others depict a more complicated story. Despite chatter from AI startups and tech giants that the technology will supercharge productivity, AI's ROI has yet to be fully realized. Just 5% of companies have seen a return on investment in the technology, according to a recent MIT report. In fact, the BetterUp and Stanford survey, which is ongoing, found that each piece of workslop added an hour and 56 minutes of work for its receiver. "I had to waste more time following up on the information and checking it with my own research. I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself," one survey respondent said.
[2]
AI 'Workslop' Is Plaguing American Companies, Says Stanford Research
If you've been on TikTok, YouTube, Facebook, or even LinkedIn recently, you're probably well aware of "AI slop" -- the low-quality, often misleading content now clogging up news feeds everywhere. But now Stanford researchers have coined a new term for when this type of low-quality AI content manifests in the workplace: "workslop." Over 40% of US-based full-time employees reported receiving workslop in the last month, according to a new survey by the Stanford Social Media Lab and BetterUp Labs, first published in Harvard Business Review. The researchers defined workslop as AI-generated work content that "masquerades as good work but lacks the substance to meaningfully advance a given task." Employees surveyed said an average of 15.4% of the content they receive now qualifies as workslop. The phenomenon occurs mostly between peers, where it accounts for 40% of the work handed over. But workslop is also sent to managers by direct reports, where it accounts for 18% of the work. The scale of the issue varied by industry, with technology and professional services the most impacted. Unsurprisingly, the survey found handing in workslop didn't leave the best impression on co-workers. Roughly half of the people surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than before. Meanwhile, 42% saw them as less trustworthy, and 37% saw that colleague as less intelligent. This isn't the first time research has emerged implying that AI causes just as many problems as it solves in the workplace. Earlier this year, a paper from the University of Chicago and the University of Copenhagen found that though AI created modest productivity gains per worker -- just a few hours in an average working month -- these were counteracted by the new tasks created by AI.
For example, a teacher might shave a few hours off their workweek using AI for lesson planning, only to lose a few more checking students' homework for AI usage. Meanwhile, a study looking specifically at programmers found that AI coding tools actually slowed them down while working on a complex set of tasks, as the AI-assisted group spent so much time prompting the AI or checking its output.
[3]
Many employees are using AI to create 'workslop'
Remember when AI was supposed to make us more productive, not hate each other? Workers are getting lazy about how they use AI to do their jobs, and the results are both costly and corrosive to trust in the workplace. An ongoing study by Stanford's Social Media Lab and behavioral research business BetterUp Labs found that 40 percent of US workers reported AI-generated garbage, dubbed "workslop," arriving in their work lives in the last month. Essentially, staffers are sending around AI-generated material that may look impressive but contains very little in the way of actionable facts and figures, which someone else then needs to sort out and turn into something useful. Workslop is the machine-learning equivalent of spam, and the study claims that sorting the wheat from the chaff - or facts from hallucinations - in such content costs around $186 per employee per month in lost productivity, a few bucks less than a ChatGPT Pro account. Furthermore, once someone receives this kind of content, over half of the respondents reported feeling annoyed, over a third said they were confused, and nearly a quarter said the messages offended them. Recipients also see the senders themselves as less trustworthy. Forty-two percent of survey respondents said that after receiving such data garbage, they trusted the coworker sending it less, and over a third took it as a sign that the sender was less creative and intelligent than they first thought. Many said such material was more trouble than it's worth. "It was just a little confusing to understand what was actually going on in the email and what he actually meant to say," one tech boss told the researchers. "It probably took an hour or two of time just to congregate [sic] everybody and repeat the information in a clear and concise way." Unsurprisingly, the tech industry is one of the biggest generators of workslop, with professional services also highlighted as a key generator.
This stream of AI-generated effluent flows both ways, the survey found. Workers sent 18 percent of workslop directly to managers, but respondents said that 16 percent of such content came from managers themselves. It seems both sides of the corporate world are happy to let AI do their thinking for them. "It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough," one finance industry respondent told the surveyors. "It is furthering the agenda of creating a mentally lazy, slow-thinking society that will become wholly dependent [sic] upon outside forces." With more companies insisting staff rely more on AI - or face losing their jobs - the rise in workslop is somewhat understandable. As staff grow increasingly likely to use the technology, the temptation to take shortcuts grows too, and - as with AI outputs in general, OpenAI has acknowledged - the prevailing attitude becomes that it's better to put something out there than nothing at all. But the study raises more questions about the actual productivity gains from AI in the workplace. A recent study by the UK government found no clear productivity improvement from introducing Microsoft 365 Copilot in the Department for Business and Trade, and research [PDF] by MIT reports that about 95 percent of organizations see no measurable return on investment from generative AI efforts. Yet AI is everywhere, and firms are keen to press on regardless. The survey is ongoing, but the early results aren't good. While it's easy to use AI to produce work that appears good enough, actually getting things right takes some skill. ®
[4]
AI-generated 'workslop' is here. It's killing teamwork and causing a multimillion dollar productivity problem, researchers say
Something strange was happening at Jeff Hancock's work. It was 2022, just after OpenAI's release of ChatGPT to the masses, and the Stanford professor noticed something was off in the research assignments he was grading. "They looked pretty good, but not quite right," Hancock tells CNBC Make It. "And then because I had 100 students, I could see that 10 other assignments looked exactly the same with the same sort of not-quite-rightness." The papers in question seemed to have a lot of text without saying anything substantive to "advance the work," and they all did so in the same overly wordy style. Kate Niederhoffer felt the same sinking feeling of suspicion when she was once asked to speak about her research, but the requester summarized her studies in a way that revealed they didn't actually know her work. Reading messages that missed the mark "felt like deep effort," Niederhoffer says. "I'm a quick reader, normally, so I [thought] 'Why is this feeling so effortful? Also this is so confusing?'" Niederhoffer and Hancock now have a name for this phenomenon: the feeling you get when you're reading a message or document so convoluted or incomplete in thought that you start to wonder, "Wait, did a human even write this, or is this AI?"
[5]
Companies are losing money to AI "workslop" that slows everything down
Cutting corners: Large language models excel at producing grammatically correct sentences but often stumble on accuracy and clarity. Without human review, their outputs create more confusion than progress. This workslop shifts effort downstream, bogging down the very workplace processes AI is supposed to make faster and more efficient. Modern workplaces are increasingly adopting artificial intelligence, promising speed, efficiency, and innovation. However, the reality is often messier in practice. Many companies feel pressured to adopt AI quickly, worried that failing to do so will leave them behind competitors. Yet work produced by AI can create more correction and confusion than it saves, a phenomenon the researchers, writing in Harvard Business Review (HBR), have termed "workslop." Research from BetterUp Labs and the Stanford Social Media Lab shows that AI-generated documents that appear polished can lack the substance needed to advance a task. According to the ongoing survey of US-based full-time employees, 40 percent reported receiving such outputs in the past month. Workers spend nearly two hours per incident correcting or interpreting them, creating significant hidden costs for companies. Multiplied across large organizations, those hours translate into thousands of lost workdays each year and millions of dollars in wasted effort. Harvard Business Review cited one retail director who was less than impressed with his company's implementation of AI automation. "I had to waste more time following up on the information and checking it with my own research," the director said. "I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself." That manager's frustration isn't an isolated case. The social and emotional toll is real.
Over half of respondents said receiving low-quality AI outputs made them feel annoyed (53 percent), while nearly a quarter reported feeling offended (22 percent). Colleagues who sent such work were often seen as less capable or reliable, showing how AI missteps can ripple through team dynamics. Even with AI adoption soaring - Gallup reports that the share of US employees using AI at least a few times a year has nearly doubled in recent years - many pilot programs fail to generate measurable returns. An MIT Media Lab study found that fewer than one in ten AI projects delivered real revenue gains, warning that "95 percent of organizations are getting zero return" on their AI bets. The challenge isn't just the technology itself, but how organizations deploy it. Blanket mandates to use AI everywhere often encourage mindless copy-paste behavior rather than thoughtful application. Researchers recommend clear guardrails, deliberate workflows, and leaders who set the example for using AI effectively. That can mean setting limits on where AI is appropriate, such as early drafts or routine summaries, while requiring human oversight for final outputs. When management models selective, purposeful use, employees are more likely to see AI as a tool rather than a shortcut.
[6]
'Workslop': AI-Generated Work Content Is Slowing Everything Down
AI slop has infiltrated the workplace, costing companies time and money. AI slop isn't limited to cringey cat videos on Facebook anymore; it has made its way into the workplace. The Harvard Business Review recently coined a term for low-quality, AI-generated work documents -- workslop. The respected business publication argues that this growing pile of phoned-in memos and reports is one reason many companies are seeing little return on their AI investments. The report lands as the AI industry keeps booming. The U.N. recently projected the global AI market will rocket from $189 billion in 2023 to a staggering $4.8 trillion by 2033. In the U.S., the share of employees who say they use AI at least a few times a year has nearly doubled from 21% to 40%, according to Gallup. And Accenture reported that the number of companies running fully AI-driven processes nearly doubled in the past year. But as offices everywhere are scrambling to plug AI tools into their workflows so they don't get left behind, very few are seeing their efforts actually pay off. Just last month, an MIT Media Lab study found that fewer than one in ten AI pilot projects delivered real revenue gains and warned that "95 percent of organizations are getting zero return" on their AI bets. Based on 150 executive interviews, a survey of 350 employees, and an analysis of 300 public AI deployments, the report triggered a dip in AI stocks. Now, researchers from BetterUp Labs, working with the Stanford Social Media Lab, are pointing to workslop as a possible culprit behind those disappointing results. "The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver," the report's authors wrote.
The Harvard Business Review defines workslop as "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." This can look like polished presentation slides, eloquent report summaries, and even decent code, but on closer inspection, the work can be missing key context and is ultimately unhelpful. According to the researchers' ongoing survey, the problem is widespread. Of 1,150 U.S.-based full-time employees across industries, 40% said they had received workslop in the past month. The proliferation of workslop could cost companies time, money, and even trust among workers. Surveyed workers reported spending an average of one hour and 56 minutes per incident dealing with low-quality AI outputs. Researchers calculated that, based on respondents' salaries, workslop carries an invisible cost of around $186 per employee per month. For companies with thousands of employees, that can translate into millions of dollars in lost productivity each year. Workslop also takes a social and emotional toll in the office. When asked, 53% of participants said receiving workslop made them feel annoyed, 38% confused, and 22% offended. Half of the respondents also reported viewing colleagues who sent workslop as less capable and reliable. To steer clear of workslop, researchers suggested managers need to set clear guardrails and model thoughtful and purposeful use of AI themselves. Blanket "AI everywhere all the time" mandates just lead to workers mindlessly copying and pasting AI responses into documents. Instead, organizations should develop best practices and recommendations regarding how generative AI can truly add value and help achieve company goals.
[7]
AI 'Workslop' Is Killing Productivity and Making Workers Miserable
AI slop is taking over workplaces. Workers reported viewing colleagues who filed low-quality AI work as "less creative, capable, and reliable than they did before receiving the output." A joint study by Stanford University researchers and a workplace performance consulting firm published in the Harvard Business Review details the plight of workers who have to fix their colleagues' AI-generated "workslop," which they describe as work content that "masquerades as good work, but lacks the substance to meaningfully advance a given task." The research, based on a survey of 1,150 workers, is the latest analysis to suggest that the injection of AI tools into the workplace has not resulted in some magic productivity boom and instead has just increased the amount of time workers say they spend fixing low-quality AI-generated "work." The Harvard Business Review study came out the day after a Financial Times analysis of hundreds of earnings reports and shareholder meeting transcripts from S&P 500 companies, which found that huge firms have trouble articulating the specific benefits of widespread AI adoption but no trouble explaining the risks and downsides the technology poses to their businesses: "The biggest US-listed companies keep talking about artificial intelligence. But other than the 'fear of missing out,' few appear to be able to describe how the technology is changing their businesses for the better," the Financial Times found. "Most of the anticipated benefits, such as increased productivity, were vaguely stated and harder to categorize than the risks." Other recent surveys and studies also paint a grim picture of AI in the workplace. The main story seems to be that there is widespread adoption of AI, but that it's not proving to be that useful, has not resulted in widespread productivity gains, and often ends up creating messes that human beings have to clean up.
Human workers see their colleagues who use AI as less competent, according to another study published in Harvard Business Review last month. A July MIT report found that "Despite $30-40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return ... Despite high-profile investment, industry-level transformation remains limited." A June Gallup poll found that AI use among workers doubled over the last two years, and that 40 percent of those polled have used AI at work in some capacity. But the poll found that "many employees are using AI at work without guardrails or guidance," and that "The benefits of using AI in the workplace are not always obvious. According to employees, the most common AI adoption challenge is 'unclear use case or value proposition.'" These studies, anecdotes we have heard from workers, and the rise of industries like "vibe coding cleanup specialists" all suggest that workers are using AI, but that it may not be leading to actual productivity gains for companies. The Harvard Business Review study proposes a possible reason for this phenomenon: workslop. The authors of that study, who come from Stanford University and the workplace productivity consulting firm BetterUp, suggest that a growing number of workers are using AI tools to make presentations and reports, write emails, and do other work tasks that they then file to their colleagues or bosses; this work often appears useful but is not: "Workslop uniquely uses machines to offload cognitive work to another human being. When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues," they write.
The researchers say that surveyed workers told them that they are now spending their time trying to figure out if any specific piece of work was created using AI tools, to identify possible hallucinations in the work, and then to manage the employee who turned in workslop. Surveyed workers reported spending time actually fixing the work, but the researchers found that "the most alarming cost may have been interpersonal." "Low effort, unhelpful AI generated work is having a significant impact on collaboration at work," they wrote. "Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent." No single study on AI in the workplace is going to be definitive, but evidence is mounting that AI is affecting people's work in the same way it's affecting everything else: It is making it easier to output low-quality slop that other people then have to wade through. Meanwhile, Microsoft researchers who spoke to nurses, financial advisers, and teachers who use AI found that the technology makes people "atrophied and unprepared" cognitively. Each study I referenced above has several anecdotes about individual workers who have found specific uses of AI that improve their own productivity, and several companies have found uses of AI that help automate specific tasks; but most of the studies find that the industry- and economy-wide productivity gains promised by AI companies are not happening. The MIT report calls this the "GenAI Divide," where many companies are pushing expensive AI tools on their workers (and even more workers are using AI without explicit permission), but few are seeing any actual return from it.
[8]
AI was supposed to boost productivity -- but a new report says 'workslop' is making it worse
If you've used AI tools, you know that they aren't always accurate. A new study even highlighted that ChatGPT is actually wrong 25% of the time. And while some big tech CEOs believe AI will cause mass unemployment, it seems that for now AI is only dragging us down. Companies across the board are deploying AI, but the gains are less than satisfactory. Turns out, much of what these tools produce is useless junk that experts are calling "workslop." Not to be confused with "AI slop," which is all the low-quality AI-generated content you see while scrolling social media. First reported by Harvard Business Review, the term "workslop" describes the flood of AI-generated content that is ultimately of very low value. Instead of solid reports and expert-level presentations, what's generated are sloppy reports, half-baked documents, and boilerplate content with little insight, or output with errors that require humans to intervene and fix the job. Basically, it's when AI delivers on volume but not quality, which ultimately causes more work. The HBR authors argue that much of this comes down to misplaced incentives. Businesses adopt AI tools to move faster, not necessarily to improve quality. Workers lean on the AI for support, even if the output isn't worth much. Because AI content is so cheap to generate, organizations tolerate "good enough" outputs -- even if employees spend hours cleaning them up later. This doesn't mean we should ditch AI completely. I have a feeling it's here to stay. The key is to learn how to use it properly and in the smartest ways. It's also important to remember that AI is designed for generating ideas, not the whole project. Users should see it as a brainstorming partner or rough-draft generator, but should never rely on it completely to do the entire job. Even if you're not the one using AI at work or haven't uploaded any AI content to social media, chances are you're feeling the ripple effects.
Whether it's a report for a client or a video of a cat serving french fries, it wastes time, lowers trust and adds hidden costs. Not to mention there are few things more frustrating than expecting AI to lighten your load only to find yourself babysitting it -- or worse, fixing its mistakes. AI has a lot of potential for productivity support and helping us manage workflows. But only if it actually saves time and produces value. If it's just creating busywork that drags you down, it's not a tool. It's a sloppy trap.
[9]
First, AI flooded the internet with slop, now it's destroying work, too - this is how you use AI and still be a stellar employee
If there's one thing we can depend on AI for, it's to prove time and time again that you can't simply replace human effort with technology. A new Harvard Business Review and Stanford Social Media Lab study found that "workslop" is overrunning business and, in the process, ruining work and reputations. If workslop sounds familiar, that's because it's a cousin to AI slop. The latter is all over the internet and characterized by bad art, poor writing, six-fingered videos, and auto-tuned-sounding music. Workslop, according to HBR, is "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." Because we're living on AI Time, and everything in technology (and life) seems, thanks to generative AI, to be happening at three times its normal pace, we suddenly have Large Language Model (LLM)-driven AI in every corner of our lives. Generative platforms like Gemini, Copilot, Claude, and ChatGPT live on our phones, and while Google search still far outstrips ChatGPT as a tool for basic search results, more and more people are turning to ChatGPT when they want deeper, richer, and theoretically more useful answers. That trend continues in the workplace, where, seemingly overnight, tools like Gemini and Copilot are embedded in productivity apps like Gmail and Microsoft Word, capable of generating content for all manner of office tasks. It's clear from this report that there has been a quick and broad embrace of these tools for those tasks and many others. In fact, workers might be squeezing a little too tight. In the study, 40% of respondents reported receiving workslop, and they're none too happy about it. They report being confused and even offended. Even worse, workslop is changing how they view coworkers. The problem with workslop is that while it appears to be complete and high-quality work, it is often not. AI can still produce errors and hallucinations.
OpenAI's GPT-5 model aims to address the hallucination issue, reducing ChatGPT's tendency to fill in the blanks with guesswork when it doesn't know something. Still, it and other AIs are not perfect. The work is often weirdly cookie-cutter, in that these are still programs (highly complex ones) that rely on a handful of go-to terms like "delve", "pivotal", "realm", and "underscore." It's not clear if the workers using AI to build reports and projects recognize this, but their coworkers and managers appear to be aware, and let's just say that the workers' next performance evaluations may not be recognizing them for "originality." According to the report, peers perceive AI-work-product-delivering coworkers as less capable, reliable, and trustworthy. They also think they're less creative and intelligent. Now, that seems a bit unfair. After all, it does take some effort to create a prompt or series of prompts that will result in a finished project. Still, the reaction to this workslop indicates that people are not necessarily curating the work. Instead of a series of prompts delivered to the AI to create some output, they might be plugging in one prompt, seeing the results, and deciding, "That's good enough." The cycle of unhappiness continues when managers and peers report this workslop to their managers. It's a bad look all around, especially if the workslop makes it out of a company and into a client's hands. What's been lost in this rush to use generative AI as a workplace tool is that it was never intended to replace us or, more specifically, our brains. The best work comes from our creative spark and deep knowledge of context, two things AI decidedly lacks. When I asked ChatGPT, "Do you think it's a good idea for me to ask you to do work for me and then for me to present it to my boss?" it did a decent job of putting the issue in perspective.
Mostly, ChatGPT discussed how it can help with research and outlining the first version of a project, saving time on repetitive tasks, and helping me generate fresh ideas. It warned me, however, about the risks of passing its output off as entirely my own. It was almost as if ChatGPT had already read the HBR study. Even it knows workslop is bad. How do we avoid workslop? HBR had some ideas, and I think it's pretty simple. Remind everyone that AI is not the answer to every problem. Ensure that everyone knows when it's best to use AI and understands what should happen to the AI output, i.e., editing, fact-checking, shaping, or rewriting. Start viewing AI as a very smart assistant, not as another, smarter version of yourself. Insist on more in-person meetings and direct collaboration. Reembrace the beauty of a brainstorm. Workslop, like AI slop before it, will surely get worse before it gets better, and there is a real chance that we may soon no longer know the difference between original human work and AI-generated projects. I hope that day never comes. We can figure this out. Even ChatGPT knows the answer.
[10]
AI "workslop" is crushing workplace efficiency, study finds
Why it matters: The term, coined by researchers in the latest Harvard Business Review, describes low-quality AI-generated content -- memos, reports, emails -- that's clogging up employees' lives and wasting their time. What they're saying: Workslop "appears polished but lacks real substance," write researchers from Stanford University who collaborated with BetterUp, a leadership coaching platform in San Francisco. * "[Y]ou might recall the feeling of confusion after opening such a document, followed by frustration," they write. "You begin to wonder if the sender simply used AI to generate large blocks of text instead of thinking it through." * "If this sounds familiar, you have been workslopped." By the numbers: In August and September, the researchers surveyed 1,150 U.S. adults, who described themselves as desk workers, about their experiences with "workslop." They didn't name the term, but merely defined it. * 40% said they'd encountered this stuff in the last month -- slowing down their work day. * They reported spending an average of 1 hour and 56 minutes dealing with each instance. Zoom out: Slop carries real costs -- looking at the average respondent salary, the researchers estimate workslop costs $186 per employee per month. * For a large organization, that can add up to more than $9 million a year in lost productivity, per their back-of-the-envelope math. * When asked how workslop made them feel, more than half of respondents said they were annoyed, 38% were confused and 22% were offended. The intrigue: Colleagues look down on workslop senders -- about half of all those surveyed viewed slop senders as less creative, capable and reliable. * And yet they're sending it, too. Of those using AI at work, 18% admitted sending AI-generated content that was "unhelpful, low effort or low quality."
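The back-of-the-envelope math behind the $9 million figure can be sketched in a few lines. This is a reconstruction under one assumption the article doesn't spell out: that the $186-per-month cost applies only to the roughly 40% of employees who report receiving workslop.

```python
# Rough sketch of the researchers' back-of-the-envelope workslop cost.
# Assumption (not stated explicitly in the article): the $186/month
# figure applies only to the ~40% of employees who receive workslop.

COST_PER_RECEIVER_PER_MONTH = 186  # dollars, per the survey
SHARE_RECEIVING_WORKSLOP = 0.40    # 40% reported receiving workslop
HEADCOUNT = 10_000                 # a large organization

receivers = HEADCOUNT * SHARE_RECEIVING_WORKSLOP
annual_cost = receivers * COST_PER_RECEIVER_PER_MONTH * 12
print(f"${annual_cost:,.0f} per year")  # prints "$8,928,000 per year"
```

That lands within rounding distance of the roughly $9 million a year the researchers cite for a large organization.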
The big picture: Workslop is the workplace offshoot of the general run of AI-generated slop most of us see day-to-day -- rabbits jumping on trampolines, fast fashion ads featuring Luigi Mangione, weird uncanny images of hands with the wrong number of fingers, and so forth. * It's also another sign that AI isn't necessarily translating into productivity gains at work. * A report from MIT out a few months ago found that 95% of businesses' AI pilot projects fail. Zoom in: The researchers noticed workslop in their daily lives, as friends, colleagues and families shared frustrating experiences, said Jeffrey Hancock, director of Stanford's Social Media Lab. * Survey respondents also shared examples -- including from healthcare providers who griped about getting long AI-generated reports from patients that diagnose their health problems using data from Fitbits or Oura rings, without any real medical underpinning. Yes, but: Well before the advent of AI, employees were generating poorly constructed memos, PowerPoints and emails. * Researchers told Axios the workers reporting the most slop were in tech, healthcare and professional services. (Consulting is basically ground zero for the over-wrought slide deck.) The bottom line: You can use AI to make your work better, says Kate Niederhoffer, vice president of BetterUp Labs and one of the researchers.
[11]
Enterprise AI projects aren't producing value. Is 'workslop' one reason why?
Businesses across industries have been betting big on AI in the workplace -- but the results have been, frankly, underwhelming. According to a new MIT Media Lab report, a staggering 95 percent of organizations have seen no measurable return on their investments in generative AI tools. MIT researchers cite several reasons for this adoption/ROI gap. Chief among them: AI doesn't slot neatly into many workplace workflows, and most models still lack the contextual awareness needed to adapt to industry-specific tasks. But a separate team at BetterUp Labs argues there's another culprit: AI workslop. Writing in Harvard Business Review, the researchers define workslop as "AI-generated content that looks polished but doesn't actually move work forward." In practice, it means employees end up spending more time fixing, rewriting, or clarifying AI's "help" than if they'd done the job themselves. The downstream effect is costly. The report estimates that workers spend nearly two hours (1 hour, 56 minutes, on average) dealing with each instance of workslop -- decoding half-baked ideas, correcting missing details, and reworking content that isn't actually useful. Worse, the burden doesn't just stay with the person who generated the workslop: managers and peers get dragged in, creating a ripple effect across teams. Researchers tie this to cognitive offloading -- using external tools to reduce mental effort. But with workslop, that burden isn't offloaded to a machine; it's offloaded onto a coworker. The phenomenon is most common among peers (40 percent of cases, according to BetterUp), though managers aren't immune: higher-ups pass workslop down to their teams about 16 percent of the time. Overall, BetterUp Labs estimates that companies with 10,000+ employees could be bleeding as much as $9 million a year in lost productivity thanks to the sheer volume of workslop -- roughly 40% of all AI-generated output in the workplace, according to the report.
Beyond the financial hit, there's also a cultural cost. Employees in the study reported feeling annoyed, confused, and even offended when handed workslop, eroding trust and reliability among coworkers. While one solution to bad AI output may be to stop using these tools completely, BetterUp Labs argues the smarter path is to set clear organizational guidelines for how employees should (and shouldn't) use AI. That means defining best-case scenarios where AI adds value, and drawing firm boundaries where it doesn't align with company strategy or values. The researchers also suggest a mindset shift: AI should be treated as a collaborator, not a crutch. In other words, it's a tool to support good work -- not a shortcut to avoid doing the work in the first place.
[12]
AI 'workslop' is emerging as the latest office headache
AI may be speeding up your emails, but much of it is turning into "workslop" - machine-generated content that leaves colleagues frustrated, confused, and behind schedule, according to new research. A new study from Stanford University and BetterUp Labs says this is becoming a regular feature of office life. In a survey of 1,150 U.S. full-time employees, 40% said they had received AI workslop in the past month. Each instance takes nearly two hours to resolve, carrying an invisible tax of about $186 per worker per month, the researchers calculated. For a company with 10,000 employees, that amounts to more than $9 million a year in lost productivity. "Workslop may feel effortless to create but exacts a toll on the organization," the researchers wrote. Employees surveyed said the effect goes beyond wasted hours. More than half reported feeling annoyed when they receive it, while others said they felt confused and even offended. Nearly half said it made them see colleagues as less capable and less trustworthy. Some were blunt. "It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough," said one finance worker. A manager in technology said an AI-written email "probably took an hour or two of time just to congregate everybody and repeat the information in a clear and concise way." The findings come as companies accelerate investment in generative AI. The number of firms with AI-led processes nearly doubled last year, and workplace use of the technology has also doubled since 2023, the study said. Yet research from MIT has found that 95% of firms see no measurable return on their AI spending. Workslop helps explain why. As tools have become more accessible, employees can quickly generate slides, reports, and emails that look convincing but lack substance.
The study found workslop flows mostly between peers, but it also moves up and down the ladder. About 18% of respondents said they had sent workslop to managers, while 16% reported receiving it from bosses. Researchers recommended that leaders establish clear guardrails, discourage indiscriminate use of AI, and promote what they call "purposeful" adoption. Without that, they warn, the spread of workslop risks undermining both productivity and trust at work.
[13]
Companies Are Being Torn Apart by AI "Workslop," Stanford Research Finds
AI is supposed to revolutionize workforce productivity, but so far that hasn't been the case. One study from MIT found that a damning 95 percent of companies that gambled on integrating the tech saw no meaningful growth in revenue. Another study exploring one of its most hyped-up applications, AI coding assistants, showed that programmers actually became slower when they depended on the AI tools. Meanwhile, a slew of reports tell an increasingly familiar tale of companies firing their workers to replace them with AI, only to scramble to rehire humans once they realize the tech isn't all it was made out to be. But why exactly is AI falling short in the workplace? In theory, shouldn't a tool that can generate essays on the fly, spit out code, hold down a conversation on any topic, and take notes on your behalf be amazing for the economy? A fascinating new report from researchers at Stanford and the firm BetterUp Labs explores that question. In a survey that's still ongoing, the team examined the responses of 1,150 full-time employees in the US across multiple industries to tease out how AI content is used in the workplace and how it affects the dynamics between employees. Their conclusion? People are using it to churn out busywork that needs to be fixed by a human with common sense, undercutting claims that it can boost productivity in the labor force. "Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers," wrote Kate Niederhoffer, a social psychologist and vice president of BetterUp, in a write-up for Harvard Business Review with her colleagues. The team calls this low quality work "workslop," in a play on "AI slop," the slang used for describing the shoddy AI text and imagery that pollute social media. They define "workslop" as AI-generated work that "masquerades as good work, but lacks the substance to meaningfully advance a given task." Sure, some employees can use AI to produce polished work.
But many simply hit "enter" on their prompt and pass along whatever messy output an AI spits out -- because, on a very surface level, it does seem passable. "The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work," the team wrote. "In other words, it transfers the effort from creator to receiver." It's an evolution of "cognitive offloading," the term that psychologists use to describe outsourcing your thinking to a piece of technology, be it a calculator or a search engine. AI content, however, "uses machines to offload cognitive work to another human being," Niederhoffer and her team argue. According to the survey, 40 percent of employees say they've received workslop in the past month, with just over 15 percent of all the content they receive at work being AI-generated. Most of this, 40 percent, is from their peers -- but 16 percent of the time it comes from up the chain of command. Wherever it's originating from, the very presence of AI content creates a testy workplace dynamic, because "when coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context," the authors wrote. "It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough," explained one survey respondent who works in finance. "I had to waste more time following up on the information and checking it with my own research," recalled another respondent who is a director in retail. "I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself." The survey results also found that receiving workslop made employees think less of the colleague who sent it.
In numbers, 54 percent of respondents said they viewed their AI-using colleague as less creative, 42 percent said they viewed them as less trustworthy, and 37 percent said they viewed them as less intelligent. "The most alarming cost may be interpersonal," Niederhoffer and her team wrote. Even if there are some limited applications where careful AI usage could boost productivity or polish without affecting quality, this nuance is at odds with how breathlessly and rapidly many business leaders are adopting the tech -- not to mention the deafening buzz coming out of the AI industry itself.
[14]
An anti-workslop workshop to save your employees from AI-created chaos and time wasting | Fortune
Research scientists have just issued a warning, of sorts, about a stealthy new threat to productivity across corporate America: Employees are creating and sharing time-wasting and reckless "workslop." The official description of workslop, per researchers from Stanford's Social Media Lab and BetterUp, an online coaching platform, is "AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." But, let's be honest, most office workers won't need a definition. We've all encountered examples of workslop in the wild. It's the memo jammed with stuffy words like "underscore" and "commendable" that leaves you scratching your head, or the report littered with em-dashes that, upon a close read, feels hollow. It's one thing to get a clumsy AI-created marketing email or solicitation from a vendor; it's another to get one from your colleague or boss. We used to complain about meetings that could have been an email; now we receive confusing workslop emails that require meetings to be decoded. Managers who shared workslop horror stories with the Stanford and BetterUp team also described redoing a direct report's project or sending it back for heavy revisions. So while companies may be spending hundreds of millions on AI software to create efficiencies and boost productivity, and encouraging employees to use it liberally, they may also be injecting friction into their operations. After surveying 1,150 full-time employees, the researchers found that workslop is flowing in all directions inside firms. Mostly it spreads laterally between peers, but managers are also sending slop to their reports, and employees are filing it to their bosses. In total, 40% of respondents said they had received a specimen they'd define as workslop in the past month from a colleague. Does this mean companies should cut back on AI? Probably not.
In a competitive marketplace, it's hard to ignore a technology that even the study authors say "can positively transform some aspects of work." What companies can do, however, is set up guardrails. They may even consider building an anti-workslop workshop for employees. Here's what it might include: By the way, you'd better schedule your anti-workslop workshop soon. The researchers say that "lazy" AI-generated work is not only slowing people down, it's also leading to employees losing respect for each other. After receiving workslop, staffers said they saw the peers behind it as less creative and less trustworthy.
[15]
Instead of improving productivity, AI is creating 'workslop'
Despite the now widespread use of AI in workplaces, workers aren't actually becoming more productive, according to a new survey led by Stanford Social Media Lab and BetterUp Labs. The report finds that while employees are using modern AI tools more than ever, they're using them to create subpar work. The new report calls the phenomenon "workslop," which it defines as "AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." In other words, it's thoughtless, sloppy work that someone will eventually have to clean up. The problem is widespread up and down the corporate ladder. Per the report, 40% of the 1,150 employees surveyed said they've received workslop in the past month, and about 15.4% of the work they receive overall meets the criteria for workslop. Most commonly, workslop is shared between peers (40% of the time), but it doesn't stop there: 18% of the time, workslop gets sent to managers. And it also happens in reverse: 16% of the time, managers (or even more senior leaders) send workslop out to their teams. The report says that two industries have been impacted the most: professional services and technology. But across all industries, the phenomenon is more than a minor annoyance. There's an emotional cost to receiving workslop. More than half of respondents (53%) said they feel annoyed, 38% confused, and 22% downright offended when they receive workslop.
[16]
How AI 'Workslop' Will Hurt Your Profits
Meaningless AI-generated content, now widely known as "slop," is all but unavoidable at the moment. The padded prose and sometimes outright nonsense generated by AI tools affects job hiring, plagues social media, puffs up countless LinkedIn posts, and leaves some experts worrying about its impact on the future of the creative arts. Now a new term coined by researchers in a recent Harvard Business Review article sums up the latest problem created by AI puffery: "workslop." This stuff, which means worthless AI-made material of almost any kind you encounter while working, from emails to reports to webpages, may be using up valuable office hours for your staff as they try to work out whether the material they're looking at is valuable "signal" needed for their jobs, or merely distracting noise. The report says that in a survey of over 1,100 U.S. adults described as desk workers, 40 percent said they'd come across workslop in the last month alone. The report found it took them an average of 1 hour and 56 minutes to deal with each example, news site Axios notes. Researchers weighed the time wasted against an average salary and calculated that workslop incidents could cost companies $186 per worker a month. Multiply this out across tens, hundreds or thousands of workers and the business costs of workslop become a substantial total. As well as wasted work cycles, duplicated efforts and lost trust, workslop means organizations "lose time, are misled by false productivity and experience stalled AI adoption," the report noted. This AI-made material does more than drive up labor costs and add to wasted time, the social media and psychology team from Stanford University in partnership with San-Francisco-based leadership platform BetterUp concluded. Among people who'd encountered workslop, 38 percent said it made them feel confused, over half said they were annoyed and 22 percent went as far as noting they were offended.
These negative reactions do nothing to promote good feelings among workers, like positive engagement or friendliness, which could further impact a company's productivity and profitability. That all sounds pretty bad, but here's the million-dollar question: where is all this slop coming from? Turns out it's not necessarily being generated by people outside of the company dumping distractingly silly or meaningless but slick-looking content into people's inboxes. Some 18 percent of the respondents in the new survey who used AI admitted sending out AI-generated or partly AI-made content that was "unhelpful, low effort or low quality," Axios notes. This is happening despite the fact that half of the people surveyed said they view people who send out slop as less capable and creative coworkers. Workslop seems like a side effect of the rapid rise of AI tech, and lazy or time-pressed staff who cut together impressive-looking but low-value material using AI tools, instead of actually researching, writing or calculating the correct material themselves. They then send it out without editing or checking its quality. Sorting out the resulting mess costs workers time and effort. The problem may only get worse. A new report from Google says 90 percent of all tech workers are using AI at work, up sharply from last year. It's a dramatic rise that shows how far AI has already penetrated into the average tech-centric workplace. And where the techy businesses go, other industries and jobs often follow. What can you do to keep your workers from engaging in this kind of time-wasting AI shenanigans? BetterUp's report suggests that company leaders can set up clear AI-use guardrails, and even model examples of thoughtful, positive AI use when putting together content for work. Encouraging collaboration mindsets rather than an urge to avoid doing work will also help -- but this level of cultural change requires a considerable amount of effort.
The report is also another sign that not all AI implementations are helpful in the workplace, and can result in wasted time. Choosing the right tools and educating your staff how to use them is key.
[17]
AI-Generated Workslop Is a $9 Million Productivity Problem, Say Stanford Researchers
A new research report has coined a new term for inadequate AI-generated content at work: workslop. The word refers to content that looks polished but lacks substance. It applies to AI-generated slideshows, lengthy reports, summaries, and code. While the content looks good on the surface, it ends up being incomplete, missing context, or unhelpful to the task at hand. And a new study released on Monday found that 40% of workers have reported receiving workslop in just the past month. "Rather than saving time, it leaves colleagues to do the real thinking and clean-up," the report reads. Related: 37% of Employers Would Rather Hire a Robot or AI Than a Recent Grad: 'Theory Alone Is No Longer Enough' To write the study, Stanford Social Media Lab researchers partnered with AI coaching platform BetterUp to conduct an online survey of 1,150 full-time U.S. desk workers this month. Employees who reported encountering workslop said that it caused them to take extra time and mental energy from their day to figure out how to appropriately address the work with the colleagues who had submitted it. Over half (53%) of respondents were "annoyed" to receive AI-generated work, and 22% were "offended." Close to half said they thought of their co-workers as "less creative and reliable" after they submitted the workslop. It also took an average of two hours to resolve each incident, making the invisible tax of workslop about $186 per month, based on the salaries the workers reported receiving. That means that the average annual cost of workslop for a 10,000-person organization is about $9 million per year, the study found. 
Related: Employers Say They Want to Hire Candidates With AI Skills, But Employees Are Still Sneaking AI Tool Use in the Office The difference between workslop and sloppy work is that workslop doesn't require any effort to create, while sloppy work still requires a little bit of effort, Stanford Professor of Communication and one of the authors of the study, Jeff Hancock, told CNBC. "Now that [the effort] piece is gone, I can generate a lot of useless or unproductive content very easily," Hancock told the outlet. Hancock recommended that business leaders give guidance to employees about when and how to appropriately use AI at work. Workers should be clear about when they're using AI, so colleagues aren't surprised by it, he said. Related: Almost 100% of Gen Zers Surveyed Admitted to Using AI Tools at Work. Here's Why They Say It Is a 'Catalyst' for Their Careers. Another study author and Vice President of BetterUp Labs, Kate Niederhoffer, told CNBC that managers should give workers specific reasons for why they should use AI to complete certain tasks. They should offer clarity about the policies and training that go along with using AI, she added. AI can provide "incredible" use cases, Niederhoffer told the outlet, but not when used in a "copy-and-paste mode" where you "just let the tool do all the work for you."
A new phenomenon called 'workslop' - AI-generated content lacking substance - is causing productivity losses and damaging workplace relationships. Research shows 40% of employees encounter workslop monthly, costing companies millions in wasted effort.
In an era where artificial intelligence (AI) promises to revolutionize productivity, a new phenomenon called 'workslop' is emerging, causing significant challenges in the workplace. Coined by researchers from Stanford Social Media Lab and BetterUp Labs, workslop refers to AI-generated work content that appears professional but lacks substance to meaningfully advance tasks.
A recent survey revealed that 40% of US-based full-time employees reported receiving workslop in the past month. This AI-generated content, which accounts for an average of 15.4% of received work, is primarily exchanged between peers (40%) and sometimes sent to managers by direct reports (18%).
The impact of workslop is significant: recipients report spending nearly two hours resolving each instance, more than half say they feel annoyed, and researchers estimate the cost at roughly $186 per worker per month, or over $9 million a year in lost productivity for a large organization. The technology and professional services sectors are disproportionately affected by workslop. This trend raises questions about the actual productivity gains from AI in the workplace. A recent MIT study found that about 95% of organizations see no measurable return on investment from generative AI efforts.
Jeff Hancock, a Stanford professor, noticed the issue when grading research assignments that looked 'not quite right' and lacked substantive content. This highlights the importance of human oversight in AI-generated content. To combat workslop and ensure effective AI implementation, experts recommend setting clear guardrails for AI use, modeling purposeful and transparent adoption, and treating AI as a collaborator rather than a crutch.
As AI adoption continues to soar, with the share of US employees using AI at least a few times a year nearly doubling in recent years, organizations must find a balance between leveraging AI's benefits and maintaining the quality and substance of work outputs.
[3]