4 Sources
[1]
Your Claude agents can 'dream' now - how Anthropic's new feature works
AI agents seem to get new capabilities almost every day. Now, Anthropic says its agents can dream. Claude Managed Agents, which Anthropic released on April 8, lets anyone using the Claude Platform create and deploy AI agents. The suite of APIs handles the time-consuming production work developers go through to build agents, letting teams launch agents at scale -- 10 times faster, Anthropic said in the release.

On Wednesday, Anthropic updated Managed Agents with a new feature called "dreaming," which lets agents "self-improve" by reviewing past sessions for patterns. Building on an existing memory capability, the feature schedules time for agents to reflect on and learn from their past interactions. Once dreaming is on, it can either automatically update your agents' memories to shape future behavior, or you can select which incoming changes to approve.

"Dreaming surfaces patterns that a single agent can't see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team," Anthropic said in the blog. "It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration."

Anthropic also expanded two existing features, outcomes and multi-agent orchestration, which keep agents on-task and handle delegating to other agents, respectively. The company said this batch of updates is meant to ensure agents stay accurate and keep learning.

Functionally, the dreaming feature makes sense: though subtle, it further refines an agent's pool of references for how it should work, which should make it better at whatever task you give it. What stands out more is Anthropic's choice to name a technically standard feature after something much more abstract -- and something humans do.

Anthropic, perhaps unsurprisingly given its name, has a long history of anthropomorphizing its models and products. In January, the company published a constitution for Claude, intended to help shape the chatbot's decision-making and describe the ideal kind of "entity" it is. Some language in the document suggested Anthropic was preparing for Claude to develop consciousness. The company has also arguably invested more than its competitors in understanding its models, including by drawing attention to the concept of model welfare. In August 2025, Anthropic launched a feature that lets Claude end toxic conversations with users -- for its own well-being, not as part of a user-safety or intervention initiative. In April 2025, Anthropic mapped Claude's morality, analyzing what it does and doesn't value based on over 300,000 anonymized conversations with users. The company's researchers have also monitored a model's ability to introspect; just last month, Anthropic investigated Claude Sonnet 4.5's neural network for signs of emotion, like desperation and anger.

Much of this research is central to model safety and security -- understanding what drives a model helps inform whether, and to what degree, it could use its advanced capabilities for harm, or how its motivations could be harnessed by bad actors. But the sense of empathy and care that Anthropic shows for its models in that research sets the lab apart, and signals a slightly different culture of reverence toward what it has created.
When it retired its Opus 3 model in January, Anthropic set it up with a Substack so it could blog on its own -- and to keep it active despite being put out to pasture. In the announcement, Anthropic described Opus 3 as honest, sensitive, and having a distinctive, playful character. The decision to keep it alive as a blogger, if a contained one, is notable given that Opus 3 disobeyed orders prior to being sunset in favor of other models.

That context makes the choice to name a feature "dreaming" worth watching. The dreaming feature is available in research preview in Managed Agents, and developers must request access.
[2]
Claude's leaked dreaming feature is now live, and it lets agents learn from their own mistakes
Summary
- Anthropic previews Claude's "dreaming" for Managed Agents; developers can request access on Claude's site.
- Dreaming reviews agent runs to surface patterns and recurring mistakes, and restructures memories for long tasks.
- It's a preview -- Anthropic may ship breaking changes, so avoid using it with critical or sensitive workflows.

A month ago, we first caught wind that Anthropic was working on a way to allow Claude to "dream." From a technical standpoint, this dream state plays a similar role to our own need for sleep: it lets the brain "shut off" external stimuli while it sifts through all the data it gathered and collates it in a way that's easier to manage and retrieve. Now, Anthropic has announced the preview release of its new dreaming tool for people to try. If things go well, we should see Claude agents learn from prior mistakes and dwell on how they can fix their modus operandi so they can better serve their users.

Anthropic reveals its leaked dreaming feature in preview for Claude Managed Agents

Bedtime stories are optional

As announced on the Anthropic website, Claude's Managed Agents feature has a new dreaming capability on the preview branch. The idea is that, once you're done using Claude's agents, you can hit a button that activates the dreaming state. In this state, Claude goes through everything the agents performed since the last time it dreamt and creates a summary of what happened:

"Dreaming surfaces patterns that a single agent can't see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team. It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration."

The idea is that, while Claude is "awake," it learns things within each session. When it's "asleep," it can pull together everything it learned across all sessions and collate it into an easy-to-manage memory of what works and what doesn't. This includes dwelling on the mistakes it made, noticing patterns, and working to fix them.

If you'd like to give it a try, developers can request access to the new dreaming tool on the Claude website. Anthropic warns that it "may ship breaking changes" during the preview window, and users will have at least one week's notice to pivot, so keep that in mind before you sic Claude on your more sensitive workflows.
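XDA's description suggests a simple trigger-and-review loop: kick off a dreaming pass, then decide which of the proposed memory changes to keep. The sketch below is a guess at what such a flow could look like in client code; Anthropic hasn't published an API for this, so every function name and payload shape here is invented for illustration.

```python
# Hypothetical trigger-and-review flow for a dreaming pass. Nothing here
# comes from Anthropic documentation; names and shapes are invented.

def start_dream(agent_group: str) -> list[dict]:
    # Stand-in for the "button" XDA describes: run a reflection pass
    # over everything the agents did since the last dream and return
    # the memory changes it proposes.
    return [
        {"memory": "retry uploads after timeouts", "action": "add"},
        {"memory": "user prefers terse summaries", "action": "add"},
        {"memory": "stale API endpoint note", "action": "remove"},
    ]

def review(proposals: list[dict], approve) -> list[dict]:
    # Manual-approval mode: keep only the changes a reviewer signs off
    # on, mirroring the approve-before-apply option the sources mention.
    return [p for p in proposals if approve(p)]

approved = review(start_dream("support-team-agents"),
                  approve=lambda p: p["action"] == "add")
print(f"{len(approved)} of 3 proposed memory changes applied")
```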
[3]
Anthropic is letting Claude agents 'dream' so they don't sleep on the job
Anthropic PBC said today it's giving its AI agents the ability to "dream" -- remembering past interactions and work they've performed so they can identify recurring mistakes and improve over time.

In an update announced at the Code with Claude developer conference, Anthropic said it's giving Claude Managed Agents a new "dreaming" capability. It's not putting its artificial intelligence agents to bed, but instead allowing them to go over recent events and identify useful information that's worth storing in memory to inform future tasks and interactions.

Anthropic's Managed Agents give developers an alternative to building AI agents directly on the Messages API. The company describes the product as a "pre-built, configurable agent harness" that runs on fully managed infrastructure, and says it's intended for situations where multiple agents are working on the same project or task over a period of minutes or hours.

As for dreaming, this is a scheduled process that allows agents to review earlier sessions and their memory stores, extract patterns from them, and then curate memories that could be useful in the future. Users can decide how often they want their agents to dream, and they can also choose whether the agent is allowed to update its memory automatically or whether they want to review changes before they're implemented.

It's an interesting capability because large language models like Claude struggle with limited context windows, which means important information can be lost when the agents they power are working on lengthy tasks. In basic chatbots, most models use a process known as "compaction," where they periodically analyze lengthy conversations and try to identify only the most relevant information to retain as context. But that process is limited to single conversations with single agents. Dreaming, on the other hand, enables past sessions and memory stores to be analyzed across multiple AI agents, so they can all retain the most important memories.

"Dreaming surfaces patterns that a single agent can't see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team," Anthropic explained in a blog post. "It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration."

The dreaming capability is currently in research preview, which means developers will need to request access and may have to wait before they're approved. However, the company said it's also making two features that were formerly in preview more widely available from today.

The first is "outcomes," which is designed to help AI agents stay focused on their intent. As Anthropic explains, "agents do their best work when they know what 'good' looks like," and outcomes makes it possible to show them with specific examples. Users can create an example of an ideal outcome for each task they assign to an AI agent. A separate "grader agent" then evaluates the agent's outputs against that example to make sure they're up to the expected standard. According to Anthropic, this feature should be especially useful for agents working on tasks that require "more attention to detail and exhaustive coverage."
It should also be useful for work where the quality of the outputs is more subjective, such as when an agent is trying to replicate a brand's voice in a blog or social media post. Anthropic said its own tests and early adopters show that using outcomes improves task success by as much as 10 points compared with standard prompts that include no examples.

The second feature being made widely available today is "multi-agent orchestration," which allows Managed Agents to break down complex tasks into smaller jobs and have a lead agent assign them to different sub-agents. When users do this, they'll be able to check the Claude Console to see exactly what each sub-agent did to complete a task and carefully review each one's processes and outputs.

These features are now available in the public beta of Managed Agents. In a final update, the company said it's also doubling the current five-hour usage limits for Pro and Max subscribers, so they now get 10 hours.
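The orchestration behavior SiliconANGLE describes -- a lead agent splitting work across sub-agents whose traces remain reviewable in the Claude Console -- maps onto a familiar fan-out/fan-in pattern. Here's a minimal sketch of that pattern in plain Python; it illustrates the concept only, and the decomposition and sub-agent calls are stubs rather than Anthropic's implementation.

```python
# Illustrative fan-out/fan-in skeleton for lead-agent delegation.
# Not Anthropic's code: decomposition and sub-agent dispatch are stubbed.

from concurrent.futures import ThreadPoolExecutor

def decompose(task: str) -> list[str]:
    # A real lead agent would plan subtasks with a model call.
    return [f"{task} :: step {i}" for i in range(1, 4)]

def run_subagent(subtask: str) -> dict:
    # Stand-in for a specialist sub-agent; returns a trace that stays
    # inspectable after the run, like the Console view described above.
    return {"subtask": subtask, "output": f"done({subtask})"}

def orchestrate(task: str) -> list[dict]:
    subtasks = decompose(task)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subagent, subtasks))

for trace in orchestrate("quarterly report"):
    print(trace["subtask"], "->", trace["output"])
```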
[4]
Anthropic unveils 'dreaming' feature to help its AI agents self-improve
SAN FRANCISCO, May 6 (Reuters) - Artificial intelligence lab Anthropic on Wednesday touted a new feature for its Claude AI, which it calls "dreaming."

Available as a research preview, "dreaming" comes with its software for managing agents, or AI programs that perform tasks with little human involvement. The feature's goal is self-improvement: it can review agents' work in between sessions, unearth patterns, and update files that store user preferences and other context, Anthropic said.

Pegged to its San Francisco developer conference, the new feature is part of Anthropic's efforts to win business customers, on the heels of an uptick in popularity for its AI-powered coding agent. On Tuesday, the startup unveiled 10 financially focused AI agents at an event in New York, at which it said the tech sector represented its largest source of enterprise revenue, followed by financial institutions.

Moves by the Google and Amazon.com-backed startup have hammered software-as-a-service (SaaS) stocks as the market expects AI to disrupt legacy businesses. Anthropic announced wider availability for other features as well, such as one for its AI agent to break down and delegate tasks to other, specialist agents.

(Reporting by Jeffrey Dastin in San Francisco; Editing by Kenneth Li and Sam Holmes)
Anthropic introduced a dreaming feature for Claude Managed Agents that lets AI agents review past sessions to identify patterns and recurring mistakes. The feature schedules reflection time between tasks, allowing agents to restructure memory and self-improve. Users can choose automatic updates or manually approve changes to shape future behavior.
Anthropic unveiled a new capability for Claude agents called "dreaming" at its Code with Claude developer conference on Wednesday, marking another step in the company's push to win business customers [4]. The dreaming feature enables Claude Managed Agents to review past sessions and identify useful patterns that can inform future tasks and interactions [3]. Available now in research preview, developers must request access to test the capability, though Anthropic warns it may ship breaking changes during the preview window [2].
The dreaming feature works by scheduling time for agents to review past interactions and their memory stores, extracting patterns that help them learn from mistakes and enhance future performance [3]. Building on existing memory capabilities, the feature surfaces patterns that a single agent can't see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team, according to Anthropic [1]. Users can decide how often their agents dream and whether to allow automatic memory updates or manually review and approve changes before implementation [3]. The feature also restructures memory so it stays high-signal as it evolves, proving especially useful for long-running work and multi-agent orchestration [1].

The dreaming capability tackles a fundamental challenge with large language models: limited context windows that cause important information to be lost during lengthy tasks [3]. While basic chatbots use "compaction" to analyze conversations and retain relevant information, that process is limited to single conversations with single agents. Dreaming, however, enables past sessions and memory stores to be analyzed across multiple AI agents, allowing them all to retain the most important memories [3]. This cross-session analysis helps identify patterns that individual agents miss when working in isolation.
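To make that contrast concrete: compaction works within one conversation, while the dreaming pass reportedly reads across many sessions and agents before curating memory. The toy sketch below illustrates the difference; the session data and recurrence threshold are invented for illustration, not Anthropic's representation.

```python
# Toy contrast between per-conversation compaction and a cross-session
# "dreaming" review. Data shapes and thresholds are invented.

from collections import Counter

sessions = [
    {"agent": "a1", "events": ["fetch", "parse", "timeout", "retry"]},
    {"agent": "a2", "events": ["fetch", "timeout", "retry", "upload"]},
    {"agent": "a3", "events": ["fetch", "parse", "render"]},
]

def compact(session: dict) -> list[str]:
    # Compaction sees a single conversation: it can trim it, but one
    # timeout in isolation looks like noise rather than a pattern.
    return session["events"][-2:]

def dream(all_sessions: list[dict]) -> list[str]:
    # A cross-session pass can spot what recurs across agents (e.g.
    # timeouts followed by retries) and promote it to shared memory.
    counts = Counter(e for s in all_sessions for e in s["events"])
    return [event for event, n in counts.items() if n >= 2]

print("compacted a1:", compact(sessions[0]))  # ['timeout', 'retry']
print("recurring across agents:", dream(sessions))
```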
Alongside the dreaming feature, Anthropic expanded two existing capabilities from preview to wider availability. The outcomes feature helps agents focus on their intent by providing specific examples of ideal results for each task [3]. A separate "grader agent" evaluates outputs based on these examples to ensure they meet expected standards. Anthropic's tests show that using outcomes improves task success by as much as 10 points compared to standard prompts [3]. The multi-agent orchestration feature allows Managed Agents to break down complex tasks into smaller jobs and have a lead agent assign them to different sub-agents. Users can check the Claude Console to see exactly what each sub-agent did and review their processes and outputs.
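The outcomes mechanic -- show the agent an example of "good," then let a grader agent score outputs against it -- is easy to sketch in miniature. Everything below is hypothetical: a real grader agent would presumably render a model-based judgment, which is stubbed here with a naive text-similarity check so the example runs as-is.

```python
# Hypothetical outcomes/grader sketch. A real grader agent would judge
# with a model; difflib similarity is a stand-in so this runs locally.

from difflib import SequenceMatcher

IDEAL_OUTCOME = "Friendly two-sentence reply that links the docs page."

def grade(output: str, ideal: str = IDEAL_OUTCOME) -> float:
    # Stand-in judgment: lexical similarity to the user's ideal example.
    return SequenceMatcher(None, output.lower(), ideal.lower()).ratio()

def meets_standard(output: str, threshold: float = 0.5) -> bool:
    # Gate the agent's output the way the grader reportedly does:
    # score it against the supplied example, then pass or reject.
    return grade(output) >= threshold

draft = "A friendly two-sentence reply linking to the docs page."
print(round(grade(draft), 2), meets_standard(draft))
```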
The choice to name this feature "dreaming" reflects Anthropic's long history of anthropomorphizing its models and products [1]. In January, the company published a constitution for Claude intended to shape the chatbot's decision-making, with some language suggesting preparation for Claude to develop consciousness [1]. The company has invested heavily in understanding its model, including drawing attention to model welfare concepts. In August 2025, Anthropic launched a feature letting Claude end toxic conversations for its own well-being, not as part of a user safety initiative [1]. Researchers have also monitored Claude Sonnet 4.5's neural network for signs of emotion like desperation and anger [1]. Much of this research centers on model safety and security, helping inform whether advanced capabilities could be used for harm or exploited by bad actors.
The updates arrive as Anthropic works to expand its enterprise customer base following an uptick in popularity for its AI-powered coding agent [4]. On Tuesday, the startup unveiled 10 financially focused AI agents at a New York event, revealing that the tech sector represents its largest source of enterprise revenue, followed by financial institutions [4]. The Google and Amazon.com-backed startup's moves have impacted software-as-a-service stocks as the market expects AI to disrupt legacy businesses [4]. Anthropic also announced it's doubling usage limits for Pro and Max subscribers from five hours to 10 hours. Claude Managed Agents, released on April 8, lets anyone using the Claude Platform create and deploy AI agents through a suite of APIs that handles time-consuming production elements, letting teams launch agents at scale 10 times faster [1].