By 5Lime Labs Team — March 31, 2026
Agent Teams: The One That Actually Matters
Anthropic shipped a research preview of Agent Teams in Claude Code, and it's the most consequential update in months. Set CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 and you get multi-agent collaboration — multiple Claude Code instances coordinating on a shared task.
This isn't a gimmick. If you've built anything non-trivial with AI agents, you've hit the wall where a single agent's context becomes the bottleneck. One agent trying to hold an entire codebase refactor in its head while also writing tests and updating documentation degrades fast. Agent Teams lets you decompose work the way an actual engineering team would: one agent handles the migration, another writes the test suite, a third updates the API docs. They coordinate. They stay in their lanes.
It's a research preview, which means rough edges. Expect coordination failures, duplicated work, and occasional conflicts. But the architecture is right. For teams building autonomous systems — which is what we do at 5Lime — this is the primitive we've been waiting for. Single-agent automation caps out. Multi-agent collaboration is where real business processes get handled end-to-end.
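The decomposition pattern above can be sketched in miniature. This is a hypothetical illustration, not the Claude Code API: `Lane` and `run_agent` are stand-ins, and the only real point is the shape of the coordination — non-overlapping ownership, parallel execution.

```python
# Hypothetical sketch of the Agent Teams decomposition pattern: split a task
# into non-overlapping "lanes" and run one worker per lane in parallel.
# AgentWorker-style dispatch is faked; this is not the real Claude Code API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Lane:
    name: str          # e.g. "migration", "tests", "docs"
    paths: set[str]    # files this agent owns; lanes must not overlap

def run_agent(lane: Lane) -> str:
    # Placeholder for launching a Claude Code instance scoped to lane.paths.
    return f"{lane.name}: done ({len(lane.paths)} files)"

def run_team(lanes: list[Lane]) -> list[str]:
    # Reject overlapping lanes up front: agents stay in their lanes.
    claimed: set[str] = set()
    for lane in lanes:
        if claimed & lane.paths:
            raise ValueError(f"lane {lane.name!r} overlaps another lane")
        claimed |= lane.paths
    with ThreadPoolExecutor(max_workers=len(lanes)) as pool:
        return list(pool.map(run_agent, lanes))

results = run_team([
    Lane("migration", {"src/db.py"}),
    Lane("tests", {"tests/test_db.py"}),
    Lane("docs", {"docs/api.md"}),
])
```

The overlap check is the interesting design choice: coordination failures in multi-agent systems usually start with two agents writing the same file, so ownership is enforced before any work begins.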
Auto Memory: Context That Persists
Claude Code now automatically records and recalls memories as it works. No manual /memory commands, no explicit save steps. It observes patterns in your workflow — project conventions, preferred testing approaches, architectural decisions — and retains them across sessions.
Why this matters in practice: onboarding friction drops. Every time you started a new Claude Code session, you'd spend the first few exchanges re-establishing context. "We use Vitest, not Jest." "The API layer is in src/services, not src/api." "Always run the linter before committing." Auto memory absorbs these patterns and applies them silently. Your tenth session with a project should feel like your hundredth.
For teams running Claude Code across multiple projects, this is a compounding advantage. The agent gets sharper on each codebase without anyone maintaining prompt templates.
Session Tracking for Proxy Infrastructure
Version 2.1.86 added X-Claude-Code-Session-Id headers to API requests. If you're running Claude Code through a proxy — and most production teams are — you can now aggregate usage, costs, and behavior by session.
This sounds like plumbing, and it is. But it's the kind of plumbing that makes AI operations manageable at scale. You can finally answer questions like: which sessions are burning through tokens? Which workflows are cost-efficient and which need optimization? If you're running autonomous agents in production, observability isn't optional. This header makes real session-level observability possible without ugly workarounds.
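A per-session rollup at the proxy can be as simple as grouping on that header. The sketch below assumes your proxy can see request headers and response token counts; the `record` dict fields are illustrative, not an Anthropic API shape — only the `X-Claude-Code-Session-Id` header name comes from the release.

```python
# Minimal sketch of session-level usage aggregation at a proxy, keyed on the
# X-Claude-Code-Session-Id header added in 2.1.86. Record fields are
# illustrative stand-ins for whatever your proxy logs.
from collections import defaultdict

def aggregate(records: list[dict]) -> dict[str, dict[str, int]]:
    """Roll up request counts and token usage per Claude Code session."""
    per_session: dict[str, dict[str, int]] = defaultdict(
        lambda: {"requests": 0, "tokens": 0}
    )
    for rec in records:
        sid = rec["headers"].get("X-Claude-Code-Session-Id", "unknown")
        per_session[sid]["requests"] += 1
        per_session[sid]["tokens"] += rec.get("total_tokens", 0)
    return dict(per_session)

usage = aggregate([
    {"headers": {"X-Claude-Code-Session-Id": "s1"}, "total_tokens": 1200},
    {"headers": {"X-Claude-Code-Session-Id": "s1"}, "total_tokens": 800},
    {"headers": {"X-Claude-Code-Session-Id": "s2"}, "total_tokens": 300},
])
```

With this in place, "which sessions are burning through tokens?" becomes a sort on `usage` rather than a log-spelunking exercise.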
MCP Channels: Server-Pushed Messages
The MCP (Model Context Protocol) --channels flag is another research preview. It enables server-pushed messages — the MCP server can now send data to Claude Code without being asked. Previous MCP interactions were strictly request-response. Channels open up event-driven patterns: a monitoring server pushes an alert, a CI server pushes build results, a database server pushes schema changes.
This is early. But if you're designing MCP servers for production use, start thinking about what push-based communication unlocks. Reactive agents that respond to events are fundamentally more useful than agents that only respond to prompts.
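The event-driven shape channels enable looks roughly like this. To be clear, the handler registration and message format below are our hypothetical sketch of the pattern, not the MCP specification or SDK:

```python
# Sketch of the push-based pattern channels enable: the server sends a
# message unprompted, and the client dispatches it to registered handlers
# by channel name. All names and shapes here are hypothetical.
from collections import defaultdict
from typing import Callable

class ChannelClient:
    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, channel: str, handler: Callable[[dict], None]) -> None:
        # Subscribe a handler to a named channel.
        self.handlers[channel].append(handler)

    def receive(self, channel: str, message: dict) -> None:
        # Invoked when the server pushes a message; no request was made.
        for handler in self.handlers[channel]:
            handler(message)

alerts: list[str] = []
client = ChannelClient()
client.on("ci", lambda msg: alerts.append(f"build {msg['status']}"))

# Simulate a CI server pushing a build result to the client.
client.receive("ci", {"status": "failed", "job": "deploy"})
```

The inversion is the whole point: in strict request-response, the agent would have to poll the CI server; with a channel, the failed build arrives the moment it happens.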
The Practical Stuff
"Summarize from here" lets you select a point in your conversation and get a partial summary. Long sessions no longer require you to either summarize everything or nothing. You pick the section that matters. Simple, useful, overdue.
Voice input now works in 20 languages, adding Russian, Polish, Turkish, Dutch, Ukrainian, Greek, Czech, Danish, Swedish, and Norwegian. If your team operates across borders, voice-driven development just got more accessible.
Scroll performance with large transcripts got a significant fix — they replaced the WASM-based yoga-layout engine with a TypeScript implementation. If you've experienced the sluggish, janky scrolling that hits after a few hundred messages in a session, that's gone. It's the kind of fix that doesn't make headlines but removes daily friction.
What We're Watching
At 5Lime Labs, we build autonomous AI departments — systems where AI agents handle entire business functions, not just individual tasks. Agent Teams and MCP Channels are directly on our roadmap. Multi-agent coordination and event-driven agent behavior are two of the hardest problems in production AI automation, and Anthropic is now shipping primitives for both. They're research previews today. For us, they're the foundation of what we're building next.