🟡 🤝 Agents Friday, April 24, 2026 · 3 min read

Anthropic: Memory for Managed Agents in public beta — AI agents that remember context between sessions

Editorial illustration: AI agent

Why it matters

Anthropic has released Memory for Claude Managed Agents into public beta. Agents can now retain user preferences, project conventions, and context between sessions. Beta limits include up to 1,000 stores per organization and 100 MB per store.

Anthropic released Memory for Claude Managed Agents into public beta on April 23, 2026. The feature lets agents retain user preferences, project conventions, and broader context across separate sessions. The beta is gated behind the managed-agents-2026-04-01 API header.

This continues the platform expansion that began on April 8, 2026, when Anthropic launched Claude Managed Agents: hosted infrastructure for long-running agents that spares developers from managing sandboxes, retry logic, and agent lifecycles themselves.
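Only the header name (managed-agents-2026-04-01) appears in the announcement. As a minimal sketch, assuming Anthropic's usual anthropic-beta header convention and an otherwise hypothetical request shape (the endpoint payload fields and store IDs below are invented for illustration), opting into the beta could look like this:

```python
# Hypothetical sketch of opting into the Managed Agents beta.
# Only the header value comes from the announcement; the "anthropic-beta"
# key is an assumption based on Anthropic's existing beta-header convention,
# and the payload fields and store IDs are purely illustrative.

BETA_HEADER = "managed-agents-2026-04-01"

def build_agent_request(api_key: str, task: str, memory_store_ids: list[str]) -> dict:
    """Assemble a request descriptor for a managed-agent run (nothing is sent)."""
    if len(memory_store_ids) > 8:  # beta limit: 8 stores mounted per session
        raise ValueError("a session can mount at most 8 memory stores")
    return {
        "headers": {
            "x-api-key": api_key,
            "anthropic-beta": BETA_HEADER,  # opt into the public beta
        },
        "body": {
            "task": task,
            "memory_stores": memory_store_ids,  # mounted into the agent sandbox
        },
    }

req = build_agent_request("sk-example", "refactor the billing module", ["proj-billing"])
print(req["headers"]["anthropic-beta"])  # managed-agents-2026-04-01
```

The guard on the store count mirrors the 8-stores-per-session beta limit described below; everything else is a placeholder until official documentation is available.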

Why does Memory matter?

Classic AI agents have one fundamental limitation: the context window. Even though modern models such as Claude Opus 4.7 (1M) and Gemini 3.1 have reached million-token context windows, a large window is still not the same as persistent memory.

Every new session starts with a “blank slate.” An agent that learned the user’s coding style, project structure, or an external service’s API key in a previous session must relearn all of it. The result: more tokens spent on re-establishing context and a slower start to productive work.

Memory solves that problem by giving the agent a persistent store that survives session restarts.

How does Memory work technically?

According to Anthropic’s documentation, a Memory store is essentially a collection of documents mounted as a directory inside the agent’s sandbox container. The agent reads files using standard tools (bash, file tools) and writes to them when it wants to retain new information.

This architecture has two practical consequences. First, Memory is transparent: developers and administrators can directly inspect what the agent remembers and, if needed, delete or edit entries. Second, the agent decides what to write and when, which reduces the need for extra orchestration logic in client code.
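The files-in-a-directory design can be simulated locally. The sketch below assumes nothing beyond what the documentation describes (a store is a directory of plain files the agent reads and writes with ordinary file tools); the file names and helper functions are invented for illustration:

```python
# Local simulation of a Memory store: a directory of plain files that
# survives "sessions". The helper names and file layout are illustrative
# assumptions, not Anthropic's API.
from pathlib import Path
import tempfile

def remember(store: Path, name: str, content: str) -> None:
    """Agent-side write: persist a fact so it survives the session."""
    (store / name).write_text(content, encoding="utf-8")

def recall(store: Path) -> dict[str, str]:
    """Session-start read: load everything the agent remembered earlier."""
    return {p.name: p.read_text(encoding="utf-8")
            for p in store.iterdir() if p.is_file()}

store = Path(tempfile.mkdtemp(prefix="memory-store-"))

# Session 1: the agent writes down what it learned.
remember(store, "coding-style.md", "Prefer type hints; tests live in tests/.")

# Session 2 (fresh context window): the same facts are read straight back.
memories = recall(store)
print(memories["coding-style.md"])
```

Because memory is just files, an administrator can audit, edit, or delete an entry with the same tools the agent itself uses, which is exactly the transparency property described above.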

What are the concrete use cases?

Anthropic highlights several scenarios in the documentation:

  • User preferences — preferred coding style, comment language, naming conventions
  • Project conventions — repository structure, test procedures, deploy scripts
  • Context carried between sessions — previous decisions, open questions, unfinished tasks

For development teams, this means that after a few sessions on a given project the agent becomes noticeably more productive, skipping the onboarding phase and going straight to solving problems.

What are the beta limits?

Anthropic has set clear boundaries in beta that should be considered when planning:

  • 1,000 stores per organization — the upper limit on the number of separate memory stores
  • 2,000 memories per store — how many separate entries each store can contain
  • 100 MB total per store — size limit
  • 8 stores per session — how many different memory stores an agent can mount simultaneously

For most use cases these limits are sufficient — a project typically maps to one store, and individual memories (preferences, conventions) rarely exceed a few KB.
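The published limits are easy to enforce client-side before hitting the API. The numbers below come from the article; the class and method names are invented for this sketch and are not part of any Anthropic SDK:

```python
# Hedged sketch of client-side guardrails for the published beta limits.
# The four constants are from the announcement; everything else is invented.

MAX_STORES_PER_ORG = 1_000
MAX_MEMORIES_PER_STORE = 2_000
MAX_STORE_BYTES = 100 * 1024 * 1024  # 100 MB total per store
MAX_STORES_PER_SESSION = 8

class MemoryStore:
    """Toy in-memory stand-in for one store, enforcing the beta limits."""

    def __init__(self) -> None:
        self.entries: dict[str, bytes] = {}

    @property
    def size_bytes(self) -> int:
        return sum(len(v) for v in self.entries.values())

    def add(self, key: str, value: bytes) -> None:
        if len(self.entries) >= MAX_MEMORIES_PER_STORE:
            raise ValueError("store is full: 2,000 memories per store")
        if self.size_bytes + len(value) > MAX_STORE_BYTES:
            raise ValueError("store would exceed 100 MB")
        self.entries[key] = value

store = MemoryStore()
store.add("style", b"prefer type hints")
print(store.size_bytes)  # 17
```

As the article notes, typical entries are a few KB, so in practice the 2,000-entry cap is likely to be reached long before the 100 MB size cap.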

What does this mean for the competition?

With Memory for Managed Agents, Anthropic positions itself against OpenAI's Memory (introduced in 2025) and, in part, against Google's Gemini Deep Research. The key difference is that Anthropic's Memory is designed for agentic workloads, long-running task pipelines rather than just a better chat experience.

For enterprise customers, this is a strong signal: persistent agent knowledge is becoming a standard component of production deployments, not a luxury add-on.

🤖

This article was generated using artificial intelligence from primary sources.