5 Sources
[1]
AI 'workslop' is creating unnecessary extra work. Here's how we can stop it
Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn't be the only one. Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it. This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational hits. Just this week, consulting firm Deloitte Australia formally apologised after a A$440,000 report prepared for the federal government was found to contain multiple AI-generated errors.

Against this backdrop, the term "workslop" has entered the conversation. Popularised in a recent Harvard Business Review article, it refers to AI-generated content that looks good but "lacks the substance to meaningfully advance a given task". Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn't have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.

The rise of AI-generated 'workslop'

According to a recent survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month. The survey's research team from BetterUp Labs and Stanford Social Media Lab found that, on average, each instance took recipients almost two hours to resolve, which they estimated would amount to US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm. Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative and trustworthy. This mirrors prior findings that there can be trust penalties to using AI.

Invisible AI, visible costs

These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common. While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure and time spent on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer. Making matters worse, many employees hide their AI use: 61% avoided revealing when they had used AI, and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

What you can do to reduce workslop

Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realise AI's benefits? If you're an employee, three simple steps can help:
- Start by asking, "Is AI the best way to do this task?" Our research suggests this is a question many users skip. If you can't explain or defend the output, don't use it.
- If you proceed, verify and work with AI output like an editor: check facts, test code, and tailor output to the context and audience.
- When the stakes are high, be transparent about how you used AI and what you checked, to signal rigour and avoid being perceived as incompetent or untrustworthy.

What employers can do

For employers, investing in governance, AI literacy and human-AI collaboration skills is key. Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate. That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop. Because workslop comes from how people use AI - not as an inevitable consequence of the tools themselves - governance only works when it shapes everyday behaviours. That requires organisations to build AI literacy alongside policies and controls.

Organisations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet less than half of employees report receiving any training or policy guidance. Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.
[2]
AI 'workslop' is creating unnecessary extra work. Here's how we can stop it
Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn't be the only one. Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it. This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational hits. Just this week, consulting firm Deloitte Australia formally apologized after a A$440,000 report prepared for the federal government was found to contain multiple AI-generated errors.

Against this backdrop, the term "workslop" has entered the conversation. Popularized in a recent Harvard Business Review article, it refers to AI-generated content that looks good but "lacks the substance to meaningfully advance a given task." Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn't have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.

The rise of AI-generated 'workslop'

According to a recent survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month. The survey's research team from BetterUp Labs and Stanford Social Media Lab found that, on average, each instance took recipients almost two hours to resolve, which they estimated would amount to US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm. Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative, and trustworthy. This mirrors prior findings that there can be trust penalties to using AI.

Invisible AI, visible costs

These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common. While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure, and time spent on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer. Making matters worse, many employees hide their AI use: 61% avoided revealing when they had used AI, and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

What you can do to reduce workslop

Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realize AI's benefits? If you're an employee, three simple steps can help:
- Start by asking, "Is AI the best way to do this task?" Our research suggests this is a question many users skip. If you can't explain or defend the output, don't use it.
- If you proceed, verify and work with AI output like an editor: check facts, test code, and tailor output to the context and audience.
- When the stakes are high, be transparent about how you used AI and what you checked, to signal rigor and avoid being perceived as incompetent or untrustworthy.

What employers can do

For employers, investing in governance, AI literacy, and human-AI collaboration skills is key. Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate. That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop. Because workslop comes from how people use AI -- not as an inevitable consequence of the tools themselves -- governance only works when it shapes everyday behaviors. That requires organizations to build AI literacy alongside policies and controls.

Organizations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet less than half of employees report receiving any training or policy guidance. Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.

This article is republished from The Conversation under a Creative Commons license.
[3]
How employers can prevent AI 'work slop'
Describing work as slop and sludge is not the ideal feedback. But the terms are a warning to employers of the risks and limitations of content generated by artificial intelligence. "Work slop is a new form of automated sludge in organisations," says André Spicer, author and dean of Bayes Business School. "While old forms of bureaucratic sludge like meetings or lengthy reports took time to produce, this new form of sludge is quick and cheap to produce in vast quantities. What is expensive is wading through it."

Many executives are championing new AI tools that help their staff to synthesise research, articulate ideas, produce documents and save time -- but at times the technology may be doing the opposite. Deloitte this month announced it would be partially refunding the Australian government for a report it produced that contained mistakes made by AI, demonstrating the risks for professional service companies. The potential harm is not only external -- to corporate reputations -- but also internal, as poor AI-generated content can result in bloated reports with mangled meanings and excessive verbiage, creating extra work for colleagues to decipher. While AI significantly decreases the effort to put pitches and proposals together, it does not "equally decrease the costs of processing this information", adds Spicer.

Michael Eiden, managing director at Alvarez & Marsal's digital technology services, says: "The accessibility of generative AI has made it easier than ever to produce work quickly -- but not necessarily to the highest standard." A recent report by BetterUp, the coaching platform, and Stanford Social Media Lab found that, on average, US desk-based employees estimate 15 per cent of the work they receive is AI work slop.

The emerging problem heightens the need for clear policies and increased monitoring of AI's use, as well as staff training. The Financial Reporting Council, the UK accountancy regulator, warned in the summer that the Big Four firms were failing to monitor how automated tools and AI affected the quality of their audits, even as firms escalate their use of the technology to perform risk assessments and obtain evidence. Last week, one of the professional accountancy bodies issued a report on AI's ethical threats -- such as fairness, bias and discrimination -- to finance professionals. Meanwhile, the UK High Court has called for the legal profession to be vigilant after two cases in which written legal arguments and witness statements, thought to have been produced with AI, contained false information, "typically a fake citation or quotation".

"Firms shouldn't simply hand employees these tools without guidance," says Eiden. "They need to clearly define what good looks like." A&M is developing practical examples and prompt guides to help staff use AI responsibly and effectively. "For high-stakes work", says Eiden, "human review remains non-negotiable -- the technology can assist, but it should never be the final author." James Osborn, group chief digital officer at KPMG UK and Switzerland, agrees, stressing the importance not just of staff verifying the accuracy of the content but also "suitable governance processes" to ensure the technology is being used appropriately.

It is not just AI's ability to help with the substance of employees' work that is under scrutiny, but also administrative tasks, including scheduling meetings and taking notes, according to a report by Asana.
It highlighted workers' complaints of AI agents sending false information and forcing teams to redo tasks, adding to their workload. Where employers are not setting out a clear policy on AI's use in the workplace, staff may use it on the sly. A report by Capgemini this year found that 63 per cent of software developers were using unauthorised tools, which can have serious ethical and security implications, such as sharing company data.

It is not only ethics and errors that are a problem, but also the demands on staff to identify and fix "work slop", a term coined this month by researchers to describe "AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task". Resulting content can be "unhelpful, incomplete, or missing crucial context about the project at hand", they wrote in a piece published in the Harvard Business Review. This means the receiver may have to "interpret, correct, or redo the work".

Kate Niederhoffer, social psychologist and vice-president at BetterUp Labs, a research arm of the coaching service, and one of the report's authors, insists employees are not creating work slop for "nefarious" reasons but typically because they "have so much work to do". Dividing users broadly into two mindsets, she describes "pilots" as those who are curious about AI, using it to augment their capabilities rather than replace them, and "passengers" as those who are begrudging, burdened by work, and who use AI to buy themselves more time. "One of the reasons people are creating work slop may be the result of too few people, everything feeling urgent and important." Niederhoffer urges managers to give staff support, and to be clear about the likely effect of poor work on colleagues.

Clarity about the purpose and use of AI is key, says Joe Hildebrand, managing director of talent and organisation at Accenture. "When you clearly understand the tangible and specific value AI can bring to your context, you are better able to design and deploy tools that create meaningful impact, not just noise." Mark Hoffman, head of Asana's Work Innovation Lab, advocates four core foundations for AI use, starting with guidelines that balance legal, IT and security concerns with practical business needs. He also recommends training that goes beyond the technical skills of prompt writing to teach softer delegation skills; accountability rules that clarify who is responsible when things go wrong; and quality control standards that prioritise accuracy and error tracking alongside efficiency. "The goal is not to just figure out what behaviours to prevent, but what behaviours to empower and enable."

Hildebrand stresses the importance of "reversibility". "Every AI deployment should include a human override or kill switch. Monitoring how often humans reverse AI decisions and using those insights to improve the system can enhance trust." As AI increasingly automates work processes, manual input will become increasingly critical, some experts say. Spicer observes that more universities are asking students to take a written exam or give a verbal presentation, instead of an electronic submission. "It is likely firms will increasingly rely on analogue input and processes to make high-stakes decisions." Stuart Mills, assistant professor of economics at Leeds University, believes managers have become swept up by "the excitement of AI and immediateness of the results" and distracted from "asking big questions about organisations and productivity."
The tendency is to measure knowledge work output by lines of code or numbers of reports, he says, which can create "an illusion of productivity". He suggests: "Managers need to ask, 'What do we do to create value? And can we use AI in our current structure, or do we need to change our structure?' I don't see those questions being asked."
[4]
Is 'AI workslop' creating unnecessary work for employees?
Steven Lockey and Nicole Gillespie of the Melbourne Business School and University of Melbourne discuss how poorly deployed AI can create additional work for users. A version of this article was originally published by The Conversation (CC BY-ND 4.0).

Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn't be the only one. Our global research shows a staggering two-thirds (66pc) of employees who use AI at work have relied on AI output without evaluating it. This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational hits. Just this week, consulting firm Deloitte Australia formally apologised after a A$440,000 report prepared for the federal government was found to contain multiple AI-generated errors.

Against this backdrop, the term "workslop" has entered the conversation. Popularised in a recent Harvard Business Review article, it refers to AI-generated content that looks good but "lacks the substance to meaningfully advance a given task". Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn't have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.

The rise of AI-generated 'workslop'

According to a recent survey reported in the Harvard Business Review article, 40pc of US workers have received workslop from their peers in the past month. The survey's research team from BetterUp Labs and Stanford Social Media Lab found that, on average, each instance took recipients almost two hours to resolve, which they estimated would amount to US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm. Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative and trustworthy. This mirrors prior findings that there can be trust penalties to using AI.

Invisible AI, visible costs

These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common. While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure and time spent on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer. Making matters worse, many employees hide their AI use: 61pc avoided revealing when they had used AI, and 55pc passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

What you can do to reduce workslop

Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realise AI's benefits? If you're an employee, three simple steps can help:
- Start by asking, "Is AI the best way to do this task?" Our research suggests this is a question many users skip. If you can't explain or defend the output, don't use it.
- If you proceed, verify and work with AI output like an editor: check facts, test code, and tailor output to the context and audience.
- When the stakes are high, be transparent about how you used AI and what you checked, to signal rigour and avoid being perceived as incompetent or untrustworthy.

What employers can do

For employers, investing in governance, AI literacy and human-AI collaboration skills is key. Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate. That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop. Because workslop comes from how people use AI - not as an inevitable consequence of the tools themselves - governance only works when it shapes everyday behaviours. That requires organisations to build AI literacy alongside policies and controls.

Organisations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet less than half of employees report receiving any training or policy guidance. Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.

By Steven Lockey and Nicole Gillespie

Steven Lockey is a post-doctoral research fellow at the Melbourne Business School. He is a trust researcher currently investigating trust in artificial intelligence. He is also interested in organisational trust and trust repair, and has previously worked with police forces in England and Wales, investigating topics such as wellbeing in policing. Nicole Gillespie is the chair of trust and professor of management at the University of Melbourne. She is a leading international authority on trust in organisations, a fellow of the Academy of Social Sciences in Australia and an international research fellow at the Centre for Corporate Reputation at the University of Oxford.
[5]
How to avoid 'workslop' and other AI pitfalls
Following my response to a reader who's resisting a push to adopt artificial intelligence tools at work, readers shared their thoughts and experiences -- pro, con and resigned -- on using AI. The consensus was that some interaction with AI is unavoidable for anyone who works with technology, and that refusing to engage with it -- even for principled reasons, such as the environmental harm it causes -- could be career-limiting.

But there's reason to believe that generative AI in the office may not be living up to its fundamental value proposition of making us more productive. A September article in Harvard Business Review (free registration required) warns that indiscriminate AI use can result in what the article dubs "workslop": "AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task." Examples of workslop include AI-generated reports, code and emails that take more time to correct and decipher than if they had been created from scratch by a human. They're destructive and wasteful -- not only of water or electricity, but of people's time, productivity and goodwill. "The insidious effect of workslop is that it shifts the burden of the work downstream," the HBR researchers said.

Of course, workslop existed before AI. We've all had our time wasted and productivity bogged down by people who dominate meetings talking about nothing, send rambling emails without reviewing them for clarity or pass halfhearted work down the line for someone else to fix. AI just allows them to do more of it, faster. And just like disinformation, once workslop enters the system, it risks polluting the pool of knowledge everyone draws from. In addition to the literal environment, AI workslop can also damage the workplace environment. The HBR researchers found that receiving workslop caused approximately half of recipients to view the sender as "less creative, capable and reliable" -- even less trustworthy or intelligent.

But, as mentioned above, it's probably not wise -- or feasible -- to avoid using AI. "AI is embedded in your everyday tasks, from your email client, grammar checkers, type-ahead, social media clients suggesting the next emoji," said Dean Grant from Port Angeles, Wash., whose technology career has spanned 50 years. The proper question, he said, is not how to avoid using it, but what it can do for you and how it can give you a competitive advantage. But even readers who said they use AI appropriately acknowledged its flaws and limitations, including that its implementation sometimes takes more effort than simply performing the task themselves. "[H]ow much time should I spend trying to get the AI to work? If I can do the task [without AI] in an hour, should I spend 30 minutes fumbling with the artificial stupid?" asked Matt Deter of Rocklin, Calif. "At what point should I cut my losses?"

So it seems an unwinnable struggle. If you can't avoid or opt out of AI altogether, how do you make sure you're not just adding to the workslop, generating resentment and killing productivity?

Don't make AI a solution in search of a problem. This one's for the leaders. Noting that "indiscriminate imperatives yield indiscriminate usage," the HBR article urges leaders encouraging AI use to provide guidelines for using it "in ways that best align to the organization's strategy, values, and vision." As with return-to-office mandates, if leaders can articulate a purpose, and workers have autonomy to push back when the mandate doesn't meet that purpose, the result is more likely to add value.

Don't let AI have the last word. Generating a raw summary of a meeting for your own reference is one thing; if you're sharing it with someone else, take the time to trim the irrelevant portions, highlight the important items, and add context where needed. If you use AI to generate ideas, take time to identify the best ones and shape them to your needs.

Be transparent about using AI. If you're worried about being judged for using AI, just know that the judgment will be even harsher if you try to pass it off as your own work, or if you knowingly pass along unvetted information with no warning.

Weigh convenience against conservation. If we can get in the habit of separating recyclables and programming thermostats, we can be equally mindful about our AI usage. An AI-generated 100-word email uses the equivalent of a single-use bottle of water to cool and power the data centers processing that query. Knowing that, do you need a transcript of every meeting you attend, or are you requesting one out of habit? Do you need ChatGPT to draft an email, or can you get results just as quickly over the phone? (Note to platform and software developers: Providing a giant, easy-to-find AI "off" switch wouldn't hurt.)

Step out of the loop once in a while. Try an AI detox every so often where you do your job without it, just to keep your brain limber. "I can't deny how useful [AI has] been for research, brainstorming and managing workloads," said Danial Qureshi, who runs a virtual marketing and social media management agency in Islamabad, Pakistan. "But lately, I've also started to feel like we're losing something important -- our own creativity. Because we rely on AI so much now, I've noticed we don't spend as much time thinking or exploring original ideas from scratch." Artificial intelligence may be a fact of modern life, but there's still nothing like the real thing.
A new phenomenon called 'workslop' is emerging as AI tools become more prevalent in the workplace, leading to unnecessary extra work and eroding trust among colleagues. Experts suggest ways for both employees and employers to mitigate these issues.
As artificial intelligence (AI) tools become increasingly prevalent in the workplace, a new phenomenon called 'workslop' is emerging, creating unnecessary extra work and eroding trust among colleagues. The term, popularized in a recent Harvard Business Review article, refers to AI-generated content that appears professional but 'lacks the substance to meaningfully advance a given task' [1].
A global study reveals that a staggering 66% of employees who use AI at work have relied on AI output without evaluating it [2]. This complacent over-reliance on AI can lead to errors, reputational damage, and increased workload for others who must identify and correct mistakes.

The impact of workslop extends beyond mere inconvenience. A survey by BetterUp Labs and Stanford Social Media Lab found that 40% of US workers have received workslop from peers in the past month, with each instance taking nearly two hours to resolve [3]. For a 10,000-person firm, this could result in approximately $9 million per year in lost productivity.
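The sources quote that estimate without showing how it is assembled. A minimal back-of-envelope sketch is below; the incident rate (one workslop item per employee per month) and the fully loaded hourly labor cost (US$37.50) are illustrative assumptions, not figures from the sources, chosen to show how a roughly US$9 million annual total can follow from the reported two hours per instance.

```python
# Back-of-envelope reconstruction of the lost-productivity estimate.
# From the sources: ~2 hours to resolve each workslop instance, and a
# US$9 million/year total for a 10,000-person firm.
# ASSUMPTIONS (not in the sources): one instance per employee per month,
# and a fully loaded hourly labor cost of US$37.50.

employees = 10_000
hours_per_instance = 2        # "almost two hours to resolve" (reported)
instances_per_month = 1       # assumption
hourly_cost_usd = 37.50       # assumption

annual_cost = (employees * instances_per_month * 12
               * hours_per_instance * hourly_cost_usd)
print(f"Estimated annual cost: US${annual_cost:,.0f}")  # US$9,000,000
```

The total scales linearly in each input, so different wage or incidence assumptions shift the figure proportionally; the two hours per instance is the only measured term here.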
Moreover, workslop can damage workplace relationships. Recipients of AI-generated content often report feeling annoyed and confused, perceiving the sender as less reliable, creative, and trustworthy [1]. This erosion of trust can have long-lasting effects on team dynamics and collaboration.

The indiscriminate use of AI tools also raises environmental concerns. An AI-generated 100-word email consumes the equivalent of a single-use bottle of water to cool and power the data centers processing the query [5]. This hidden environmental cost adds to the ethical considerations surrounding AI adoption in the workplace.
To address these challenges, experts recommend several strategies for both employees and employers: ask whether AI is the best way to do a task before using it, verify and edit AI output like an editor before sharing it, and be transparent about AI use when the stakes are high [1]. Employers, for their part, are urged to invest in governance, clear guidelines on appropriate use, and AI literacy training [4].
While AI has the potential to enhance productivity and innovation, its implementation requires careful consideration and management. As Dean Grant, a technology veteran, notes, 'AI is embedded in your everyday tasks... The proper question is not how to avoid using it, but what it can do for you and how it can give you a competitive advantage' [5].

By addressing the challenges of workslop head-on, organizations can harness the benefits of AI while mitigating its risks, ultimately creating a more efficient and trustworthy work environment.
Summarized by Navi