April 20, 2026 · Samarth at CLSkills · claude code · agent teams · multi-agent

Claude Code Agent Teams: How Multiple Claude Sessions Now Coordinate On One Task (Practical Guide)

Anthropic added Agent Teams to Claude Code — multiple Claude sessions that message each other directly, share a task list, and coordinate with file-locking. Experimental flag required. Here's how it works, when to use it, and what breaks.

Claude Code Agent Teams: Multiple Sessions That Actually Talk To Each Other

Anthropic quietly rolled out Agent Teams in Claude Code over March-April 2026. The short version: you can now run multiple Claude Code sessions that message each other directly, share a task list, and self-coordinate. It's experimental and off by default, but it works, and it's a meaningfully different primitive from what was there before.

I've been running it for a week on real refactors. Here's what it actually does, when it earns its keep, and the sharp edges nobody talks about.

TL;DR

  1. Agent Teams is opt-in experimental — you flip CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 in your shell to enable it.
  2. Requires Claude Code v2.1.32 or later.
  3. Unlike subagents (which only report up to a parent), teammates in an Agent Team message each other directly through a mailbox system and self-claim work via a shared task list.
  4. Single-terminal mode (cycle teammates with Shift+Down) or split-pane mode (tmux / iTerm2 panes) both supported.
  5. Sweet spot is 3-5 teammates on parallel work where outputs compete (research, code review, debugging competing hypotheses).
  6. Token cost scales roughly linearly with teammate count — no magical multiplication of capability, but the coordination gains are real on the right task.
  7. Known limits: no /resume support yet, no nested teams, one team per session.

The Boring But Important Distinction: Agent Teams Is Not Subagents

Claude Code has had subagents for months. If you've used them, you know the shape: the main Claude spawns a subagent, the subagent does a narrow task in its own context window, returns a report, and disappears. Main Claude synthesizes and continues. Subagents don't talk to each other. They don't persist state across turns. They're a fan-out pattern, not a team pattern.

Agent Teams is genuinely different:

  • Teammates persist. Each teammate has its own context window and stays alive for the duration of the task.
  • Teammates message each other. There's a mailbox system. Teammate A can send a direct message to Teammate B ("I found X in auth/, can you cross-check in session/?").
  • Shared task list. All teammates see the same list of tasks and self-claim by writing to a shared file with locking semantics.
  • No hierarchy required. Subagents have a parent; Agent Teams can be flat or have a designated team lead who mostly orchestrates.

Mental model: subagents are function calls, Agent Teams is threads with shared memory.
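To make that analogy concrete, here's a toy Python sketch — my own illustration, not Claude Code internals. Subagents behave like one-shot function calls that return and disappear; a team behaves like persistent workers sharing a task queue and a mailbox.

```python
import queue
import threading

# Subagent pattern: a fan-out of one-shot "function calls".
def subagent(task):
    return f"report on {task}"          # does its narrow task, returns, disappears

reports = [subagent(t) for t in ["auth/", "session/"]]

# Agent Team pattern: persistent workers that self-claim from a shared
# task list and post messages to a shared mailbox.
tasks = queue.Queue()
mailbox = queue.Queue()                 # teammates can message each other here

def teammate(name):
    while True:
        try:
            task = tasks.get_nowait()   # self-claim work from the shared list
        except queue.Empty:
            return                      # nothing left to claim
        mailbox.put((name, f"finished {task}"))

for t in ["auth/", "session/", "billing/"]:
    tasks.put(t)

workers = [threading.Thread(target=teammate, args=(f"t{i}",)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Note the structural difference: the fan-out produces exactly one report per call, while the workers divide three tasks between two persistent threads and communicate through shared state.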

How To Turn It On

First, check your Claude Code version:

claude --version

If you're below v2.1.32, update:

curl -fsSL https://claude.ai/install.sh | sh

Then enable the experimental flag. Add to your shell profile:

export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

Restart your shell or re-source the profile, then start Claude Code as normal:

claude

You'll know it's active when you see team-related slash commands in the help menu — /team, /teammate, /message, /tasks.

Your First Team — 3 Teammates, One Refactor

The standard way to kick this off is with /team inside Claude Code. The main session becomes your team lead. You then add teammates:

/teammate add reviewer
/teammate add debugger
/teammate add writer

Each teammate gets its own name, its own system prompt (you can customize per-role), and its own context window. You can now:

  • Cycle between teammates in the same terminal with Shift+Down / Shift+Up.
  • Or spawn each teammate in a tmux / iTerm2 pane and watch them work in parallel.

The shared task list lives in .claude/team/tasks.json in your repo root. When you add a task, any teammate can see it, claim it, and work it.
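For a sense of the shape, here's a hypothetical sketch of what a task entry in tasks.json could look like. The schema is undocumented and every field name below is invented for illustration — don't build tooling against it.

```json
{
  "tasks": [
    {
      "id": "task-1",
      "description": "Map the auth/ subsystem and summarize entry points",
      "status": "claimed",
      "claimed_by": "reviewer"
    },
    {
      "id": "task-2",
      "description": "Cross-check session handling against auth findings",
      "status": "open",
      "claimed_by": null
    }
  ]
}
```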

When Agent Teams Actually Earns Its Keep

I tested this on 8 different tasks over a week. Here's the honest breakdown of where it wins and where it loses.

Wins (measurable speedup or quality lift)

Parallel codebase research. Refactor planning where you need to understand 3-4 independent subsystems before making a decision. Each teammate maps one subsystem, then messages findings to the lead. Faster than serial reading and the context windows don't blow up.

Adversarial review with competing hypotheses. One teammate proposes a solution, another teammate role-plays the senior reviewer who hates it, a third teammate reconciles. The back-and-forth produces better outputs than any single Claude talking to itself (verified by blind-grading outputs from both modes on 6 real PRs).

Debugging flaky tests with multiple failure modes. One teammate owns the race-condition hypothesis, one owns the fixture-ordering hypothesis, one owns the mock-leakage hypothesis. They run experiments in parallel and message updates. Found a real bug in my test suite that I'd been ignoring for a month.

Losses (don't use Agent Teams for these)

Single-file edits. The coordination overhead eats the speedup. Just use one Claude session.

Sequential tasks where step 2 depends on step 1 output. Parallelism provides no benefit. The teammates end up waiting on each other and you've paid 3x the tokens.

Quick questions. "What does this regex do?" doesn't need a team. It needs a 5-second answer.

Anything under ~30 minutes of work. The setup cost (defining teammates, seeding tasks, orchestrating) isn't amortized on short tasks.

The Sharp Edges

Token cost scales linearly. A 4-teammate session burns ~4x the tokens of a solo session. Agent Teams doesn't give you free parallelism — you're paying for it. Budget accordingly, especially on Opus 4.7 where that adds up fast.

No /resume support yet. If your Claude Code session dies mid-task, the team state is lost. The task list in .claude/team/tasks.json survives, but teammate context windows don't. Known limitation per the docs; Anthropic says resume support is planned.

One team per session. You can't have two concurrent teams. If you want to work on two unrelated projects with separate teams, you need two Claude Code windows.

No nested teams. A teammate can't spawn its own team. So no recursive fan-out. This is actually fine — nested teams would be a coordination nightmare.

Shared task list has basic file-locking. Works, but I saw one race condition where two teammates claimed the same task in a ~50ms window. Rare but non-zero. Spot-check the task list periodically.

The Prompt Codes That Compound On Agent Teams

This is where my existing testing on prompt codes (I tested 120 codes over 3 months) starts mattering again. A few prompts multiply their effect in a team setting:

  • CRIT on the reviewer teammate. Instead of the reviewer producing a gentle critique, CRIT forces 3 specific flaws. The team lead now has real feedback to act on, not just validation.
  • /skeptic on the team lead. Before synthesizing teammate reports, the lead reframes the question. Prevents cascade failures where teammates answer the wrong question together.
  • /blindspots on a dedicated "auditor" teammate. One teammate's entire job is surfacing what the rest missed. Meta, but it works.

Full classification of which codes actually shift reasoning vs just reshape output is in the Cheat Sheet — Pro tier includes Agent Teams-specific combos.

Is It Ready For Your Production Workflow?

Honest assessment: not quite, but close.

The capability is real. On the right task, it's a legitimate productivity lift. But the experimental flag + no resume + one-team-per-session means it's not something I'd ship in a team SOP yet. Treat it as a tool you reach for on specific hard tasks, not a default mode.

If you're doing serious Claude Code work, turn on the flag, try it on one real problem this week, and form your own opinion. My prediction: by v2.2.x (likely June 2026) Anthropic will have added resume support + multi-team support, and at that point it becomes a default for most non-trivial engineering tasks.

Comparison With Subagents

If you're unsure when to reach for Agent Teams vs existing Claude Code subagents, I wrote a separate post comparing them head-to-head: Agent Teams vs Subagents in Claude Code — When to Use Which.

What About Claude Cowork?

If you got here searching for "Claude Cowork" and are confused because this post is about Claude Code — they're different products. Claude Cowork is Anthropic's desktop agent for non-technical office workers (Google Drive, Gmail, DocuSign integrations). I wrote a separate explainer: What Claude Cowork Actually Is — And Who It's For.

Sources

  • Official Claude Code docs on Agent Teams (code.claude.com/docs/en/agent-teams)
  • My own testing: 8 tasks over 7 days, mix of refactoring, debugging, and review work
  • Hacker News discussion (item 46902368) for community bug reports and tips
  • Claude Code v2.1.32 release notes

Questions about a specific Agent Teams workflow or pattern? Hit reply on my newsletter or drop a question at /prompts — I answer every email.

Want the full research library?

120 tested Claude prompt codes with before/after output and token deltas.

See the Cheat Sheet — $15