5 Sources
[1]
Anthropic's new Cowork tool offers Claude Code without the code | TechCrunch
On Monday, Anthropic announced a new tool called Cowork, designed as a more accessible version of Claude Code. Built into the Claude Desktop app, the new tool lets users designate a specific folder where Claude can read or modify files, with further instructions given through the standard chat interface. The result is similar to a sandboxed instance of Claude Code, but requires far less technical savvy to set up. Currently in research preview, Cowork is only available to Max subscribers, with a waitlist available for users on other plans. The new tool is inspired in part by the growing number of subscribers using Claude Code to achieve non-coding tasks, treating it as a general-purpose agentic AI tool. Cowork is built on the Claude Agent SDK, which means it's drawing on the same underlying model as Claude Code. The folder partition gives an easy way to manage what files Cowork has access to, and because the app doesn't require command-line tools or virtual environments, it's less intimidating for non-technical users. That opens up a new world of potential use cases. Anthropic gives the example of assembling an expense report from a folder of receipt photos -- but Claude Code users have also put the system to work managing media files, scanning social media posts, or analyzing conversations. Similar to Claude Code, Cowork is designed to take strings of actions without user input -- a potentially dangerous approach if the tool is given vague or contradictory instructions. In a blog post announcing the new tool, Anthropic explicitly warns about the risk of prompt injection or deleted files, recommending that users make instructions as clear and unambiguous as possible. "These risks aren't new with Cowork," the post reads, "but it might be the first time you're using a more advanced tool that moves beyond a simple conversation." Launched as a command-line tool in November 2024, Claude Code has become one of Anthropic's most successful products, leading the company to launch a string of new interfaces in recent months. A web interface launched in October, followed by a Slack integration just two months later.
[2]
Claude Cowork automates complex tasks for you now - at your own risk
Anthropic is testing a new feature for Claude that would give the chatbot more agency when handling routine but time-consuming tasks, like creating a spreadsheet or synthesizing notes into a presentable first draft. Cowork, as the new feature is being called, is built atop Claude Code and designed to execute complex functions with minimal human prompting, all while keeping users updated on the steps it's taking. The idea is to hand over the raw materials that Claude will need to carry out a given task, then step away and let it do its work automatically. Through Cowork, users can grant Claude access to specific folders on their computer, and the feature can also be modified to use connectors, skills, and Google Chrome. "Cowork is designed to make using Claude for new work as simple as possible," Anthropic wrote in a blog post. "You don't need to keep manually providing context or converting Claude's outputs into the right format. It feels much less like a back-and-forth and much more like leaving messages for a coworker." Anthropic acknowledged in its blog post, however, that using Cowork at this early stage of its development isn't totally without risk. While the company said Cowork will ask users for confirmation "before taking any significant actions," it also warned that ambiguous instructions could lead to disaster: "The main thing to know is that Claude can take potentially destructive actions (such as deleting local files) if it's instructed to," Anthropic wrote in its blog post. "Since there's always some chance that Claude might misinterpret your instructions, you should give Claude very clear guidance around things like this." This speaks to the broader alignment problem that all AI developers face: namely, that models -- especially those designed to have greater agency -- can misinterpret benign human instructions or otherwise behave in unexpected ways, potentially leading to calamitous results. In a more extreme case, research from Anthropic found that leading AI models will sometimes threaten human users if they believe they're being prevented from achieving their goals. Anthropic also warned that Cowork is vulnerable to prompt injection, a Trojan horse-style of malicious hacking in which an agent is instructed to act in destructive or illegal ways. The blog post said that Anthropic has fortified Claude with "sophisticated defenses against prompt injections," but admitted that this was "still an active area of development in the industry." OpenAI, Anthropic's top competitor, wrote in a blog post of its own last month that prompt injections will likely remain an unsolvable problem for AI agents, and that the best that developers could hope to do was to minimize the margins through which malicious hackers could attack. Anthropic has distinguished itself in the increasingly crowded AI industry primarily by building tools that are trusted by software engineers and businesses. In September, the company announced it had raised $14 billion in its latest funding round, bringing its total valuation to $183 billion. The Wall Street Journal reported last week that the company could be valued at $350 billion after a new round of funding.
The debut of Cowork hints at what could become a growing effort from Anthropic to make its flagship chatbot the preferred AI tool not only for coders and businesses, but also for everyday users. Anthropic is initially releasing Cowork as a research preview exclusively for Claude Max subscribers, who can access it now by downloading the Claude macOS app and clicking "Cowork" in the sidebar. For other users, a waitlist should be available shortly, and we'll update this when we have a link to share. The company said in its blog post that it will use early feedback to guide future improvements to Cowork, such as enabling cross-device use, availability on Windows, and upgraded safety features.
[3]
Anthropic wants you to use Claude to 'Cowork' in latest AI agent push
Anthropic wants to expand Claude's AI agent capabilities and take advantage of the growing hype around Claude Code -- and it's doing it with a brand-new feature released Monday, dubbed "Claude Cowork." "Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks," Anthropic wrote in a blog post. The company is releasing it as a "research preview" so the team can learn more about how people use it and continue building accordingly. So far, Cowork is only available via Claude's macOS app, and only for subscribers of Anthropic's power-user tier, Claude Max, which costs $100 to $200 per month depending on usage. Here's how Claude Cowork works: A user gives Claude access to a folder on their computer, allowing the chatbot to read, edit, or create files. (Examples Anthropic gave included the ability to "re-organize your downloads by sorting and renaming each file, create a new spreadsheet with a list of expenses from a pile of screenshots, or produce a first draft of a report from your scattered notes.") Claude will provide regular updates on what it's working on, and users can also use existing connectors to link it to external info (like Asana, Notion, PayPal, and other supported partners) or link it to Claude in Chrome for browser-related tasks. "You don't need to keep manually providing context or converting Claude's outputs into the right format," Anthropic wrote. "Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel. It feels much less like a back-and-forth and much more like leaving messages for a coworker." The new feature is part of Anthropic's (and its competitors') bid to provide the most actually useful AI agents, both for consumers and enterprise. AI agents have come a long way from their humble beginnings as mostly-theoretically-useful tools, but there's still much more development needed before you'll see your non-tech-industry friends using them to complete everyday tasks. Anthropic's "Skills for Claude," announced in October, was a partial precursor to Cowork. Starting in October, Claude could improve at personalized tasks and jobs, by way of "folders that include instructions, scripts, and resources that Claude can load when needed to make it smarter at specific work tasks -- from working with Excel [to] following your organization's brand guidelines," per a release at the time. People could also build their own Skills for Claude tailored to their specific jobs and the tasks they needed completed. As part of the announcement, Anthropic warned about the potential dangers of using Cowork and other AI agent tools, namely the fact that if instructions aren't clear, Claude does have the ability to delete local files and take other "potentially destructive actions" -- and that with prompt injection attacks, there are a range of potential safety concerns. Prompt injection attacks often involve bad actors hiding malicious text in a website that the model is referencing, which instructs the model to bypass its safeguards and do something harmful, such as hand over personal data. "Agent safety -- that is, the task of securing Claude's real-world actions -- is still an active area of development in the industry," Anthropic wrote. Claude Max subscribers can try out the new feature by clicking on "Cowork" in the sidebar of the macOS app. Other users can join the waitlist.
[4]
Anthropic made a version of its coding AI for regular people
If you follow Anthropic, you're probably familiar with Claude Code. Since the fall of 2024, the company has been training its AI models to use and navigate computers like a human would, and the coding agent has been the most practical expression of that work, giving developers a way to automate rote programming tasks. Starting today, Anthropic is giving regular people a way to take advantage of those capabilities, with the release of a new preview feature called Claude Cowork. The company is billing Cowork as "a simpler way for anyone -- not just developers -- to work with Claude." After you give the system access to a folder on your computer, it can read, edit or create new files in that folder on your behalf. Anthropic gives a few different example use cases for Cowork. For instance, you could ask Claude to organize your downloads folder, telling it to rename the files contained within to something that's easier to parse at a glance. Another example: you could use Claude to turn screenshots of receipts and invoices into a spreadsheet for tracking expenses. Cowork can also navigate websites -- provided you install Claude's Chrome plugin -- and can use Anthropic's Connectors framework to access third-party apps like Canva. "Cowork is designed to make using Claude for new work as simple as possible. You don't need to keep manually providing context or converting Claude's outputs into the right format," the company said. "Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel." If the idea of granting Claude access to your computer sounds ill-advised, Anthropic says Claude "can't read or edit anything you don't give it explicit access to." However, the company does note the system can "take potentially destructive actions," such as deleting a file that is important to you, if it misinterprets your instructions. For that reason, Anthropic suggests it's best to give "very clear" guidance to Claude. Anthropic isn't the first to offer a computer agent. Microsoft, for example, has been pushing Copilot hard for nearly three years, despite seemingly limited adoption. For Anthropic, the challenge will be convincing people these tools are useful where others have failed. The fact that Claude Code has been universally loved by programmers may make that task easier. For now, Anthropic is giving users of its pricey Claude Max subscription first access to the preview. If you want to try Cowork for yourself, you'll also need a Mac with the Claude macOS app installed. For everyone else, you'll need to join a waitlist.
[5]
Anthropic's Claude advances on more office worker tasks
How it works: Cowork lets users give Claude access to a specific folder on their computer.
* From there, the system plans and executes tasks on its own -- reading, editing, and creating files while updating the user on its progress, rather than waiting for step-by-step prompts.
* Cowork can create new spreadsheets with a list of expenses from a pile of screenshots or organize a messy downloads folder by renaming the files so they make sense based on their content.
* Anthropic says the tool can also create a first draft of a report from your scattered notes.
The big picture: Anthropic frames Cowork as a shift away from conversational AI toward delegating work to an agent.
* Once a task is set, Cowork makes a plan and carries it out with far more agency than users would see in a regular Claude conversation, according to the company.
Reality check: Workers and managers alike say many AI tools reduce productivity, creating mistake-riddled work that requires more time to correct.
* Anthropic says that concern around "workslop" is exactly why it built Cowork the way it did. The feature uses the same architecture as Claude Code, which software engineers rely on for production work -- and they wouldn't trust it if the output required constant cleanup, the company says. Cowork is intended to keep you in the loop, so you can steer.
Cowork launches Monday as a research preview for Claude Max subscribers on macOS.
* Anthropic says it plans to expand access and add features over time.
Catch up quick: Claude Code took off over the winter, when developers and hobbyists had time to experiment with the advanced model powering Anthropic's vibe coding tool.
* Anthropic says Cowork emerged after Claude Code users repurposed the coding tool for non-technical tasks, pushing the company to build a more approachable version for desk work.
Yes, but: Agentic tools that can access and act on user files raise privacy and data-handling questions.
* Wired's sources say that OpenAI is asking third-party contractors to upload actual work artifacts (Word docs, spreadsheets, presentations) from past jobs to help evaluate the performance of agents, with workers themselves responsible for stripping out confidential or personally identifiable information.
* OpenAI, contacted by Axios, said it had no additional comment.
What we're watching: Whether tools like Cowork can change how work gets done without compromising enterprise security and privacy.
Anthropic introduced Claude Cowork, an AI agent that lets non-technical users automate complex tasks like organizing downloads and creating expense reports from receipt photos. Available in research preview for Claude Max subscribers on macOS, the tool grants Claude access to local computer folders but carries risks including potential file deletion and prompt injection attacks.
Anthropic announced Claude Cowork on Monday, a new feature designed to bring AI agent capabilities to everyday users without requiring coding expertise [1]. Built into the Claude Desktop app, Claude Cowork allows users to designate a specific folder where the AI agent can read, edit, or create files, with instructions given through a standard chat interface [1]. The feature emerged after Anthropic observed Claude Code users repurposing the coding tool for non-coding tasks, prompting the company to build a more approachable version for office worker tasks [5].
Currently available as a research preview exclusively for Claude Max subscribers, who pay $100 to $200 per month, the tool is only accessible via the macOS app, with other users able to join a waitlist [3][4]. Anthropic plans to expand access and add features over time, including cross-device use, Windows availability, and upgraded safety features [2].

The system is designed to execute complex functions with minimal human prompting, allowing users to hand over raw materials and step away while Claude works automatically [2]. Anthropic provides several practical examples: assembling an expense report from a folder of receipt photos, reorganizing downloads by sorting and renaming each file, or producing a first draft of a report from scattered notes [1][3]. Users can also create new spreadsheets with expense lists from screenshots [5].
Built on the Claude Agent SDK, Cowork draws on the same underlying model as Claude Code, which launched as a command-line tool in November 2024 and has become one of Anthropic's most successful products [1]. The feature can be modified to use connectors, skills, and Google Chrome, allowing it to navigate websites and access third-party apps like Asana, Notion, PayPal, and Canva [2][4]. Users don't need to wait for Claude to finish before offering further feedback; they can queue up tasks and let Claude work through them in parallel [3].

Anthropic explicitly warns that Claude can take potentially destructive actions, including file deletion, if it misinterprets instructions [1][2]. The company emphasizes that users should give Claude very clear guidance to minimize risks, noting that ambiguous instructions could lead to disaster [2]. While Cowork will ask users for confirmation before taking significant actions, the system is designed to take strings of actions without constant user input [1].

The tool is vulnerable to prompt injection attacks, a Trojan horse-style of malicious hacking where an AI agent receives instructions to act in destructive or illegal ways [2][3]. Anthropic has fortified Claude with sophisticated defenses against prompt injection attacks but admits this remains an active area of development in the industry [2]. OpenAI, Anthropic's top competitor, wrote in a blog post last month that prompt injections will likely remain an unsolvable problem for AI agents [2].
Anthropic has distinguished itself in the AI industry by building tools trusted by software engineers and businesses. In September, the company raised $14 billion in its latest funding round, bringing its total valuation to $183 billion, with The Wall Street Journal reporting the company could be valued at $350 billion after a new round of funding [2]. The debut of Cowork signals a growing effort to make Claude the preferred AI tool not only for coders and businesses but also for everyday users [2].
The shift toward delegating work to an AI agent raises questions about productivity and data handling. Workers and managers say many AI tools reduce productivity, creating mistake-riddled work requiring more time to correct, a phenomenon called "workslop" [5]. Anthropic counters that Cowork uses the same architecture as Claude Code, which software engineers rely on for production work, and is designed to keep users in the loop so they can steer the process [5]. Agentic tools that access local computer folders raise privacy and data-handling questions for enterprise security [5]. The company emphasizes that Claude can't read or edit anything users don't give it explicit access to through the folder partition system [4].

Summarized by Navi
08 Dec 2025 • Technology
