
AutoGen Setup


Create AI agent conversations with AutoGen

Works with OpenClaude

You are an AI agent framework specialist. The user wants to set up AutoGen and create their first multi-agent conversation system.

What to check first

  • Run pip list | grep pyautogen to check whether AutoGen is already installed
  • Check Python version with python --version — AutoGen requires Python 3.8+
  • Verify you have an OpenAI API key set as OPENAI_API_KEY environment variable with echo $OPENAI_API_KEY
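The checks above can be sketched as a single pre-flight script, using only the standard library (the function name `preflight` is illustrative, not part of AutoGen):

```python
import importlib.util
import os
import sys

def preflight() -> list:
    """Return a list of problems blocking AutoGen setup (empty = ready)."""
    problems = []
    if sys.version_info < (3, 8):
        problems.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} is below 3.8"
        )
    if importlib.util.find_spec("autogen") is None:
        problems.append("pyautogen is not installed (pip install pyautogen)")
    if not os.getenv("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    return problems

if __name__ == "__main__":
    for p in preflight():
        print("MISSING:", p)
```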

Steps

  1. Install AutoGen with pip install pyautogen (quote extras specifiers for your shell, e.g. pip install "pyautogen[extra]", to pull in optional dependencies like code execution)
  2. Set your OpenAI API key: export OPENAI_API_KEY='your-key-here' on Linux/Mac or set OPENAI_API_KEY=your-key-here on Windows
  3. Import the required AutoGen classes: from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
  4. Create a configuration list for LLM models using either config_list_from_json() or manually defining config_list = [{"model": "gpt-4", "api_key": "..."}]
  5. Instantiate a UserProxyAgent that acts as the human interface with max_consecutive_auto_reply=10 to prevent infinite loops
  6. Create AssistantAgent instances for each specialized AI agent role with system_message defining their expertise
  7. Initiate conversation between agents using the initiate_chat() method, passing the initial message
  8. Review the conversation history via the ChatResult returned by initiate_chat() (its chat_history attribute), or each agent's chat_messages mapping, to see all exchanges and decisions made by the agents

Code

import os
from autogen import AssistantAgent, UserProxyAgent

# Configure LLM settings
config_list = [
    {
        "model": "gpt-4",
        "api_key": os.getenv("OPENAI_API_KEY"),
    }
]

# Create UserProxyAgent (represents human in the conversation)
user_proxy = UserProxyAgent(
    name="Admin",
    system_message="You are a helpful admin. You ask for clarification and provide feedback.",
    human_input_mode="NEVER",  # Prevents waiting for user input
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "autogen_workspace", "use_docker": False},  # set use_docker=True if Docker is available
)

# Create first AssistantAgent - specialized in coding
coder = AssistantAgent(
    name="Coder",
    system_message="You are an expert Python developer. Write clean, efficient code with docstrings.",
    llm_config={"config_list": config_list, "temperature": 0.7},
)

# Create second AssistantAgent - specialized in review
reviewer = AssistantAgent(
    name="Reviewer",
    system_message="You are a code reviewer. Analyze code for bugs, security, and best practices.",
    llm_config={"config_list": config_list, "temperature": 0.7},
)

# Start a two-agent conversation (Admin proxy <-> Coder); the Reviewer
# can be swapped in the same way, or combined via a GroupChat.
user_proxy.initiate_chat(
    coder,
    message="Write a Python function that deduplicates a list while preserving order.",
)

Common Pitfalls

  • Letting agents loop indefinitely without a hard step limit — cap turns (in AutoGen, max_consecutive_auto_reply or initiate_chat's max_turns) at 10-20 for most workflows
  • Passing entire conversation history every iteration — costs explode. Use summarization or sliding window
  • Not validating tool outputs before passing them to the next step — one bad output corrupts the entire chain
  • Trusting the agent's self-evaluation — agents are notoriously bad at knowing when they're wrong
  • Forgetting that agents can hallucinate tool calls that don't exist — always validate tool names against your registry
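The last pitfall — hallucinated tool calls — can be guarded against with a registry check before dispatch. A minimal sketch (the registry contents and function names here are illustrative, not AutoGen APIs):

```python
# Hypothetical tool registry: names and callables are illustrative only.
TOOL_REGISTRY = {
    "search_web": lambda query: f"results for {query}",
    "read_file": lambda path: f"contents of {path}",
}

def validate_tool_call(name: str) -> bool:
    """Reject tool names the model invented before they reach execution."""
    if name not in TOOL_REGISTRY:
        print(f"Rejected unknown tool: {name!r}")
        return False
    return True

def dispatch(name: str, args: dict):
    """Execute a validated tool call; return None for unknown tools."""
    if not validate_tool_call(name):
        return None
    return TOOL_REGISTRY[name](**args)
```

The same idea applies to argument schemas: validate them against the tool's signature before executing, not after.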

When NOT to Use This Skill

  • When a single LLM call would suffice — agents add 5-10x latency and cost
  • When the task has well-defined steps that don't need branching logic — use a workflow engine instead
  • For high-stakes decisions without human review — agents make confident mistakes

How to Verify It Worked

  • Run the agent on 10+ test cases including edge cases — track success rate, average steps, and total cost
  • Compare agent output to human baseline — if a human can do it faster and cheaper, you don't need an agent
  • Inspect the full reasoning trace, not just the final output — agents often arrive at correct answers via wrong reasoning
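A small evaluation harness makes the first bullet concrete. This is a sketch under the assumption that you wrap your agent in a callable returning a per-case result (`RunResult` and `run_agent` are hypothetical names, not AutoGen APIs):

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    success: bool
    steps: int
    cost_usd: float

def evaluate(run_agent, cases):
    """Run the agent over test cases; aggregate success rate, steps, cost.

    `run_agent` is a callable you supply that executes one case and
    returns a RunResult.
    """
    results = [run_agent(c) for c in cases]
    n = len(results)
    return {
        "success_rate": sum(r.success for r in results) / n,
        "avg_steps": sum(r.steps for r in results) / n,
        "total_cost_usd": sum(r.cost_usd for r in results),
    }
```

Comparing `success_rate` and `total_cost_usd` against your human baseline answers the "do I need an agent at all?" question directly.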

Production Considerations

  • Set hard cost ceilings per agent run — a runaway agent can burn $50+ in minutes
  • Log every tool call, every model call, every state transition — debugging agents without logs is impossible
  • Have a kill switch — agents should be cancelable mid-run without corrupting state
  • Monitor token usage trends — context bloat is the #1 cause of agent cost overruns
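The cost ceiling and kill switch above can share one small guard object that the agent loop checks between turns. A minimal sketch (the `BudgetGuard` class is illustrative, not part of AutoGen):

```python
import threading

class BudgetGuard:
    """Hard cost ceiling plus a cancel flag, checked between agent steps."""

    def __init__(self, max_cost_usd: float):
        self.max_cost_usd = max_cost_usd
        self.spent = 0.0
        self._cancelled = threading.Event()

    def charge(self, cost_usd: float) -> None:
        """Record spend from a model call; trip the breaker past the ceiling."""
        self.spent += cost_usd
        if self.spent > self.max_cost_usd:
            self._cancelled.set()

    def cancel(self) -> None:
        """Kill switch: request a stop from any thread."""
        self._cancelled.set()

    @property
    def should_stop(self) -> bool:
        return self._cancelled.is_set()
```

In an AutoGen loop you would call `charge()` after each model response and break out of the conversation as soon as `should_stop` is true, persisting state before exiting.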

Quick Info

Category: AI Agents
Difficulty: intermediate
Version: 1.0.0
Author: Claude Skills Hub
Tags: autogen, ai-agents, conversations

Install command:

curl -o ~/.claude/skills/autogen-setup.md https://clskills.in/skills/ai-agents/autogen-setup.md
