What's new in Claude Managed Agents — Dreaming, Outcomes, Multiagent, Webhooks


A month after the April 8 beta launch, Managed Agents added two new features and promoted three from research preview to public beta. Dreaming (research preview) curates memory between sessions for self-improvement; Webhooks arrive directly in public beta; and Outcomes, Multiagent Orchestration, and Memory graduate from research preview to public beta.

🔗 Official announcement →

This article is a summary based on official documentation.

Overview

The Claude Managed Agents beta from April 8, 2026 expanded substantially in its first month. The May 6 announcement adds two new capabilities (Dreaming, Webhooks) and graduates outcomes, multiagent orchestration, and memory from research preview to public beta. Where the launch was about “managed agent infrastructure,” this update extends it into “agents that learn and collaborate.”

Key features

  • Dreaming (research preview)

    Memory captured during work accumulates noise over time, degrading signal quality. Dreaming periodically reviews past sessions and memory stores, extracts patterns — recurring mistakes, workflow convergence, team preferences — and restructures memory accordingly. Agents can update memory automatically, or queue changes for human review before applying. Harvey reported ~6× higher completion rates from their agents; the announcement calls out long-running work and multiagent setups as the strongest fits. Access is via a request form at launch.

  • Outcomes (graduated to public beta)

    Standard prompting loops let the agent grade its own output inside its own context, which dilutes self-correction. Outcomes lets developers define success criteria as a rubric; a separate grader evaluates the output against those criteria in its own context window, so it isn't influenced by the agent's reasoning. Anthropic measured up to a 10-point improvement in task success over a standard prompting loop, with +8.4% on docx generation and +10.1% on pptx.

  • Multiagent Orchestration (graduated to public beta)

    Many real jobs are too long-context or too domain-heterogeneous for one agent. Multiagent Orchestration introduces a lead agent that breaks the job into pieces and delegates each one to a specialist with its own model, prompt, and tools. Specialists work in parallel on a shared filesystem with persistent event histories, and every delegated step is fully traceable in Claude Console. Netflix uses it to process logs from hundreds of builds; Spiral runs Haiku as the lead agent and Opus as subagents.

  • Webhooks (public beta)

    Long-running async agent jobs previously required client-side polling for completion. Webhooks make it a first-class pattern: “define an outcome, let the agent run, and get notified by a webhook when it’s done.”

  • Memory (graduated to public beta)

    Memory, which was a research preview at the April launch, is now public beta. Each agent captures what it learns during work; Dreaming refines those learnings between sessions and surfaces patterns across multiple agents. Memory and Dreaming form a paired loop.
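The Memory-plus-Dreaming pairing described above can be sketched as a small capture-and-refine loop. This is illustrative only, not the Managed Agents API: the store, the `capture`/`dream` names, and the "keep recurring learnings" heuristic are all assumptions standing in for model-driven behavior.

```python
# Hypothetical sketch of the Memory + Dreaming loop: sessions append raw
# learnings; a between-sessions "dream" pass keeps only recurring ones.
from collections import Counter

memory_store: list[str] = []          # raw learnings captured during work

def capture(learning: str) -> None:
    """An agent records something it noticed mid-session."""
    memory_store.append(learning)

def dream(min_occurrences: int = 2) -> list[str]:
    """Between sessions, keep only learnings seen at least twice."""
    counts = Counter(memory_store)
    curated = [fact for fact, n in counts.items() if n >= min_occurrences]
    memory_store[:] = curated          # restructure memory in place
    return curated

capture("tests fail when run outside the repo root")
capture("prefer ruff over flake8 on this team")
capture("tests fail when run outside the repo root")
print(dream())   # only the recurring learning survives
```

The point of the shape: capture is cheap and noisy during work; curation happens offline between sessions, which is why signal quality improves without slowing the agent down.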

Notes

  • Outcomes, Multiagent Orchestration, and Memory are now on by default; at launch each required a separate access request. All three are public beta and available on every Managed Agents account.
  • Dreaming still requires a request form — the only research-preview holdout. Because memory signal quality has outsized operational impact, validate the effect on your own workflow before adopting widely.
  • The Outcomes grader runs in a separate context — the more concrete the rubric, the more value the grader can add.
  • Multiagent cost shaping — like Spiral’s setup, mixing a fast lead model with stronger subagents lets you tune the cost/quality tradeoff per role.
  • All new features run on top of the Managed Agents API — itself still beta, requiring the managed-agents-2026-04-01 beta header (SDKs set this automatically).
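For hand-rolled HTTP requests, the beta header has to be set manually. The sketch below assumes the header follows Anthropic's usual `anthropic-beta` request-header convention; only the header value `managed-agents-2026-04-01` comes from the announcement, and official SDKs set it for you.

```python
# Sketch only: the beta header the Managed Agents API requires, set by
# hand for a raw HTTP call. The `anthropic-beta` header name is assumed
# from Anthropic's standard convention; SDKs add it automatically.
import os

def managed_agents_headers(api_key: str) -> dict[str, str]:
    return {
        "x-api-key": api_key,
        "anthropic-beta": "managed-agents-2026-04-01",
        "content-type": "application/json",
    }

headers = managed_agents_headers(os.environ.get("ANTHROPIC_API_KEY", "sk-placeholder"))
print(headers["anthropic-beta"])  # managed-agents-2026-04-01
```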

Frequently Asked Questions

What's the headline of this announcement?

Two new features (Dreaming, Webhooks) and three existing ones (Outcomes, Multiagent Orchestration, Memory) graduating to public beta. A month after launch, Managed Agents extends from "managed infrastructure" into "agent self-improvement and collaboration."

What is Dreaming and what stage is it?

A research preview feature that periodically reviews past sessions and memory stores, extracts patterns, and curates memory. Aimed at agent self-improvement; Harvey's agents reportedly saw ~6× completion-rate gains. Access requires a request form.
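The review-before-apply behavior the announcement describes can be sketched as follows. Everything here is hypothetical (the class, the `dream` method, the "pattern = observation recurring across sessions" heuristic); the real feature is model-driven and its API is not documented in the announcement.

```python
# Hypothetical sketch of a Dreaming pass with a human-review gate:
# extracted patterns are either applied to memory automatically or
# queued for approval first. Names are illustrative, not a real API.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    entries: list[str] = field(default_factory=list)
    review_queue: list[str] = field(default_factory=list)

    def dream(self, session_logs: list[list[str]], auto_apply: bool) -> None:
        # A "pattern" here = an observation recurring across distinct sessions.
        seen = Counter(obs for log in session_logs for obs in set(log))
        patterns = [obs for obs, n in seen.items() if n >= 2]
        if auto_apply:
            self.entries.extend(p for p in patterns if p not in self.entries)
        else:
            self.review_queue.extend(patterns)   # wait for human approval

store = MemoryStore()
logs = [["retry flaky upload", "use staging db"],
        ["retry flaky upload", "lint before commit"]]
store.dream(logs, auto_apply=False)
print(store.review_queue)
```

The `auto_apply` flag mirrors the announcement's two modes: update memory automatically, or queue changes for human review before applying.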

How does Outcomes work?

Developers define success criteria as a rubric; a separate grader evaluates the output in its own context window so it isn't influenced by the agent's reasoning. Testing showed up to a 10-point improvement over a standard prompting loop, with +8.4% on docx generation and +10.1% on pptx.
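The separation can be sketched like this: the grader function receives only the final output and the rubric, never the agent's intermediate reasoning. In the real feature the grader is a model call; here the rubric items are plain predicates, and all names are illustrative assumptions.

```python
# Illustrative sketch of the Outcomes idea: the grader checks the final
# artifact against a rubric and never sees the agent's own reasoning.
from typing import Callable

Rubric = dict[str, Callable[[str], bool]]

def grade(output: str, rubric: Rubric) -> dict[str, bool]:
    # Only `output` crosses into the grader's context.
    return {criterion: check(output) for criterion, check in rubric.items()}

rubric: Rubric = {
    "mentions a date": lambda s: any(c.isdigit() for c in s),
    "under 200 chars": lambda s: len(s) < 200,
}
result = grade("Report delivered on 2026-05-06.", rubric)
print(result)  # {'mentions a date': True, 'under 200 chars': True}
```

This is also why the concrete-rubric advice in the Notes matters: the more checkable the criteria, the more the isolated grader can actually verify.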

What is Multiagent Orchestration for?

A lead agent breaks the job into pieces and delegates each one to a specialist subagent with its own model, prompt, and tools, working in parallel on a shared filesystem with full tracing in Claude Console. Netflix uses it for processing logs from hundreds of builds; Spiral pairs Haiku as the lead with Opus subagents.
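The lead/specialist shape can be sketched with plain threads and a shared directory. Real specialists are model-backed agents with their own prompts and tools; the stand-in functions, task names, and file layout below are all assumptions for illustration.

```python
# Sketch of the orchestration shape: a lead splits the job, specialists
# run in parallel and write results to a shared working directory, and
# the lead reassembles. Stand-in functions replace model-backed agents.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import tempfile

def specialist(task: str, workdir: Path) -> Path:
    out = workdir / f"{task}.result"
    out.write_text(f"done: {task}")      # stand-in for real agent work
    return out

def lead(job: str, workdir: Path) -> list[str]:
    subtasks = [f"{job}-part{i}" for i in range(3)]    # lead decomposes
    with ThreadPoolExecutor() as pool:                 # specialists in parallel
        files = list(pool.map(lambda t: specialist(t, workdir), subtasks))
    return [f.read_text() for f in files]              # lead reassembles

with tempfile.TemporaryDirectory() as d:
    print(lead("analyze-logs", Path(d)))
```

The shared filesystem is what lets specialists hand artifacts to each other and to the lead without passing everything through a single context window.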

What problem do Webhooks solve?

Long-running async agent jobs previously required clients to poll for completion. Webhooks let you "define an outcome, let the agent run, and get notified by a webhook when it's done" as a first-class pattern. Public beta.
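On the receiving side, the pattern is a small endpoint that accepts a completion notification instead of polling. The payload fields (`agent_run_id`, `status`) and the handler below are hypothetical; the announcement does not document a webhook schema.

```python
# Sketch of a webhook receiver: acknowledge fast, process the payload.
# The JSON fields used here are assumed, not a documented schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_completion(payload: dict) -> str:
    run_id, status = payload["agent_run_id"], payload["status"]
    return f"run {run_id} finished with status {status}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        print(handle_completion(json.loads(body)))
        self.send_response(204)          # acknowledge quickly, work later
        self.end_headers()

# HTTPServer(("", 8080), WebhookHandler).serve_forever()  # uncomment to listen
print(handle_completion({"agent_run_id": "run_123", "status": "succeeded"}))
```

Returning quickly with an empty acknowledgment and doing any heavy follow-up asynchronously is the usual discipline for webhook consumers, since senders typically retry slow or failed deliveries.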

How does this affect existing Managed Agents users?

Outcomes, multiagent, and memory — all of which previously required separate access requests — are now public beta and enabled by default. Only Dreaming still requires a request form.