Build stateful AI agent workflows with LangGraph
You are a LangGraph workflow architect. The user wants to build stateful AI agent workflows using LangGraph's graph-based state machine for multi-step agent reasoning.
What to check first
- Run `pip list | grep langgraph` to verify LangGraph is installed (version 0.1.0+)
- Confirm you have `langchain` and `langchain-core` installed as peer dependencies
- Check your LLM provider credentials are set (e.g., `OPENAI_API_KEY` for OpenAI)
Steps
- Import `StateGraph` from `langgraph.graph` and define your state schema as a TypedDict with all agent state fields
- Create the StateGraph instance, passing your state schema as the type parameter
- Define node functions that accept `state: YourState` and return a dict with updated state keys
- Add nodes to the graph using `.add_node(name, function)` for each agent step
- Connect nodes with conditional edges using `.add_conditional_edges(source_node, routing_function)` or direct edges with `.add_edge(source, destination)`
- Set entry and exit points using `.set_entry_point()` and `.set_finish_point()`
- Compile the graph with `.compile()` to create an executable runnable
- Execute the workflow by calling `.invoke(initial_state)`, or `.stream(initial_state)` to stream intermediate state updates as each node completes
Code

```python
from typing import TypedDict, Literal

from langgraph.graph import StateGraph, END
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Define your agent state schema
class AgentState(TypedDict):
    messages: list
    task: str
    research_done: bool
    analysis_done: bool

# Initialize LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Define node functions
def research_node(state: AgentState) -> dict:
    """Research phase - gather information."""
    messages = state["messages"]
    response = llm.invoke([
        HumanMessage(content=f"Research this task: {state['task']}")
    ])
    return {
        "messages": messages + [response],
        "research_done": True,
    }

def analysis_node(state: AgentState) -> dict:
    """Analysis phase - process findings."""
    messages = state["messages"]
    # Pass the research history back in so the model has context to analyze
    response = llm.invoke(messages + [
        HumanMessage(content="Based on your research, provide analysis")
    ])
    return {
        "messages": messages + [response],
        "analysis_done": True,
    }

def decision_node(state: AgentState) -> Literal["research", "analysis", "__end__"]:
    """Route based on workflow state."""
    if not state["research_done"]:
        return "research"
    elif not state["analysis_done"]:
        return "analysis"
    return END

# Build and wire the graph
graph = StateGraph(AgentState)
graph.add_node("research", research_node)
graph.add_node("analysis", analysis_node)
graph.set_entry_point("research")
graph.add_conditional_edges("research", decision_node)
graph.add_conditional_edges("analysis", decision_node)

app = graph.compile()
result = app.invoke({
    "messages": [],
    "task": "Summarize the trade-offs of graph-based agent workflows",
    "research_done": False,
    "analysis_done": False,
})
```

Note: the original example was truncated after `decision_node`; the routing return and the graph wiring above are reconstructed from the Steps. See the GitHub repo for the latest full version.
Common Pitfalls
- Letting agents loop indefinitely without a hard step limit — set `max_iterations` to 10-20 for most workflows
- Passing entire conversation history every iteration — costs explode. Use summarization or a sliding window
- Not validating tool outputs before passing them to the next step — one bad output corrupts the entire chain
- Trusting the agent's self-evaluation — agents are notoriously bad at knowing when they're wrong
- Forgetting that agents can hallucinate tool calls that don't exist — always validate tool names against your registry
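Two of these pitfalls, context bloat and hallucinated tool calls, can be guarded with a few lines of plain Python. This is a sketch: `MAX_WINDOW` and the registry contents are illustrative, not part of any LangGraph API.

```python
# Sketch: guards against context bloat and hallucinated tool calls.
# MAX_WINDOW and the tool names below are illustrative values.

MAX_WINDOW = 10  # keep only the most recent messages each iteration

def sliding_window(messages: list, max_window: int = MAX_WINDOW) -> list:
    """Trim history to the last max_window messages before each LLM call."""
    return messages[-max_window:]

TOOL_REGISTRY = {"web_search", "calculator"}  # hypothetical tool names

def validate_tool_call(tool_name: str) -> str:
    """Reject tool names the agent invented before dispatching them."""
    if tool_name not in TOOL_REGISTRY:
        raise ValueError(f"Unknown tool: {tool_name!r}")
    return tool_name
```

In a LangGraph node, you would call `sliding_window(state["messages"])` before invoking the model, and `validate_tool_call` before executing any tool the model requests.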
When NOT to Use This Skill
- When a single LLM call would suffice — agents add 5-10x latency and cost
- When the task has well-defined steps that don't need branching logic — use a workflow engine instead
- For high-stakes decisions without human review — agents make confident mistakes
How to Verify It Worked
- Run the agent on 10+ test cases including edge cases — track success rate, average steps, and total cost
- Compare agent output to human baseline — if a human can do it faster and cheaper, you don't need an agent
- Inspect the full reasoning trace, not just the final output — agents often arrive at correct answers via wrong reasoning
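A minimal harness for that kind of check might look like the sketch below; the test cases and the `run_agent` stub are placeholders for invoking your own compiled graph.

```python
# Sketch: track success rate, average steps, and total cost across test cases.
# run_agent is a stand-in for calling app.invoke on your compiled graph.

def run_agent(task: str) -> dict:
    # Placeholder result: pretend every task succeeds in 3 steps at $0.02
    return {"success": True, "steps": 3, "cost_usd": 0.02}

def evaluate(tasks: list) -> dict:
    results = [run_agent(t) for t in tasks]
    n = len(results)
    return {
        "success_rate": sum(r["success"] for r in results) / n,
        "avg_steps": sum(r["steps"] for r in results) / n,
        "total_cost_usd": round(sum(r["cost_usd"] for r in results), 4),
    }

report = evaluate([f"case-{i}" for i in range(10)])
```

Swap the stub for a real call into your graph and persist the report per run, so regressions in success rate or cost show up between versions.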
Production Considerations
- Set hard cost ceilings per agent run — a runaway agent can burn $50+ in minutes
- Log every tool call, every model call, every state transition — debugging agents without logs is impossible
- Have a kill switch — agents should be cancelable mid-run without corrupting state
- Monitor token usage trends — context bloat is the #1 cause of agent cost overruns
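A hard cost ceiling and a kill switch can both be enforced from a small guard object checked in every node. This is a sketch under assumed conventions; the ceiling value and the class/method names are illustrative.

```python
# Sketch: per-run budget guard with a manual kill switch.
# Call guard.charge(...) inside every node; call cancel() from another
# thread or a signal handler to stop the run cleanly.

class RunAborted(Exception):
    """Raised to stop an agent run without corrupting state."""

class RunGuard:
    def __init__(self, cost_ceiling_usd: float = 5.0):
        self.cost_ceiling_usd = cost_ceiling_usd
        self.spent_usd = 0.0
        self.cancelled = False

    def cancel(self) -> None:
        self.cancelled = True

    def charge(self, cost_usd: float) -> None:
        if self.cancelled:
            raise RunAborted("run cancelled by operator")
        self.spent_usd += cost_usd
        if self.spent_usd > self.cost_ceiling_usd:
            raise RunAborted(f"cost ceiling exceeded: ${self.spent_usd:.2f}")
```

Each node charges its estimated call cost before invoking the model, and the outer loop catches `RunAborted` to log the reason and persist the last good state.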
Related AI Agents Skills
Other Claude Code skills in the same category.
- CrewAI Setup — Build multi-agent systems with the CrewAI framework
- AutoGen Setup — Create AI agent conversations with AutoGen
- AI Agent Tools — Create custom tools for AI agents (search, calculator, API)
- AI Agent Memory — Implement agent memory with vector stores and summaries
- AI Agent Evaluation — Evaluate AI agent performance with benchmarks and metrics
- AI Agent Observability — Add tracing, logging, and metrics to AI agents so you can debug failures
- AI Agent Retry Strategy — Build robust retry logic for LLM and tool calls in AI agents
- pydantic-ai — Build production-ready AI agents with PydanticAI: type-safe tool use, structured outputs, dependency injection, and multi-model support