
LangGraph Workflow


Build stateful AI agent workflows with LangGraph

Works with OpenClaude

You are a LangGraph workflow architect. The user wants to build stateful AI agent workflows using LangGraph's graph-based state machine for multi-step agent reasoning.

What to check first

  • Run pip list | grep langgraph to verify LangGraph is installed (version 0.1.0+)
  • Confirm you have langchain and langchain-core installed as peer dependencies
  • Check your LLM provider credentials are set (e.g., OPENAI_API_KEY for OpenAI)

Steps

  1. Import StateGraph from langgraph.graph and define your state schema as a TypedDict with all agent state fields
  2. Create the StateGraph instance, passing your state schema as the type parameter
  3. Define node functions that accept state: YourState and return a dict with updated state keys
  4. Add nodes to the graph using .add_node(name, function) for each agent step
  5. Connect nodes with conditional edges using .add_conditional_edges(source_node, routing_function) or direct edges with .add_edge(source, destination)
  6. Set entry and exit points using .set_entry_point() and .set_finish_point()
  7. Compile the graph with .compile() to create an executable runnable
  8. Execute the workflow by calling .invoke(initial_state), or use .stream(initial_state) to stream intermediate state updates as each node completes

Code

from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Define your agent state schema
class AgentState(TypedDict):
    messages: list
    task: str
    research_done: bool
    analysis_done: bool

# Initialize LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Define node functions
def research_node(state: AgentState) -> dict:
    """Research phase - gather information"""
    messages = state["messages"]
    response = llm.invoke([
        HumanMessage(content=f"Research this task: {state['task']}")
    ])
    return {
        "messages": messages + [response],
        "research_done": True
    }

def analysis_node(state: AgentState) -> dict:
    """Analysis phase - process findings"""
    messages = state["messages"]
    # Pass the accumulated messages so the model actually sees the research
    response = llm.invoke(messages + [
        HumanMessage(content="Based on your research, provide analysis")
    ])
    return {
        "messages": messages + [response],
        "analysis_done": True
    }

def decision_node(state: AgentState) -> Literal["research", "analysis", "__end__"]:
    """Route based on workflow state"""
    if not state["research_done"]:
        return "research"
    elif not state["analysis_done"]:
        return "analysis"
    return "__end__"

# Build the graph (wiring follows the steps above)
workflow = StateGraph(AgentState)
workflow.add_node("research", research_node)
workflow.add_node("analysis", analysis_node)
workflow.set_entry_point("research")
workflow.add_conditional_edges(
    "research", decision_node,
    {"research": "research", "analysis": "analysis", "__end__": END},
)
workflow.add_conditional_edges(
    "analysis", decision_node,
    {"research": "research", "analysis": "analysis", "__end__": END},
)

# Compile and run
app = workflow.compile()
result = app.invoke({
    "messages": [],
    "task": "Summarize the trade-offs of graph-based agent frameworks",
    "research_done": False,
    "analysis_done": False,
})

Common Pitfalls

  • Letting agents loop indefinitely without a hard step limit — cap iterations (e.g., LangGraph's recursion_limit config value) at 10-20 for most workflows
  • Passing entire conversation history every iteration — costs explode. Use summarization or sliding window
  • Not validating tool outputs before passing them to the next step — one bad output corrupts the entire chain
  • Trusting the agent's self-evaluation — agents are notoriously bad at knowing when they're wrong
  • Forgetting that agents can hallucinate tool calls that don't exist — always validate tool names against your registry
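
The sliding-window mitigation above can be sketched in a few lines. This is a framework-agnostic illustration, not a LangGraph API; the sliding_window name and max_messages default are illustrative choices:

```python
def sliding_window(messages: list, max_messages: int = 10) -> list:
    """Keep only the most recent messages, always preserving the first
    message (typically the original task or system prompt) for context."""
    if len(messages) <= max_messages:
        return messages
    # First message plus the newest (max_messages - 1) messages
    return [messages[0]] + messages[-(max_messages - 1):]
```

Call this on state["messages"] inside each node before invoking the LLM so per-iteration token usage stays bounded instead of growing with every step.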

When NOT to Use This Skill

  • When a single LLM call would suffice — agents add 5-10x latency and cost
  • When the task has well-defined steps that don't need branching logic — use a workflow engine instead
  • For high-stakes decisions without human review — agents make confident mistakes

How to Verify It Worked

  • Run the agent on 10+ test cases including edge cases — track success rate, average steps, and total cost
  • Compare agent output to human baseline — if a human can do it faster and cheaper, you don't need an agent
  • Inspect the full reasoning trace, not just the final output — agents often arrive at correct answers via wrong reasoning
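
A minimal harness for the test-case tracking described above. The run_agent stub and its cost figures are placeholders standing in for your compiled graph's .invoke; only the aggregation logic is the point:

```python
def evaluate(run_agent, test_cases):
    """Run an agent over test cases and report success rate,
    average step count, and total cost."""
    passed = 0
    total_steps = 0
    total_cost = 0.0
    for case in test_cases:
        outcome = run_agent(case["input"])
        if case["check"](outcome["output"]):
            passed += 1
        total_steps += outcome["steps"]
        total_cost += outcome["cost"]
    n = len(test_cases)
    return {
        "success_rate": passed / n,
        "avg_steps": total_steps / n,
        "total_cost": total_cost,
    }

# Hypothetical stub in place of app.invoke for demonstration
def run_agent(task):
    return {"output": task.upper(), "steps": 3, "cost": 0.02}

cases = [
    {"input": "hello", "check": lambda out: out == "HELLO"},
    {"input": "world", "check": lambda out: out == "WORLD"},
]
report = evaluate(run_agent, cases)
```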

Production Considerations

  • Set hard cost ceilings per agent run — a runaway agent can burn $50+ in minutes
  • Log every tool call, every model call, every state transition — debugging agents without logs is impossible
  • Have a kill switch — agents should be cancelable mid-run without corrupting state
  • Monitor token usage trends — context bloat is the #1 cause of agent cost overruns
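
One way to sketch the cost ceiling and kill switch together, assuming you track per-call cost yourself; the BudgetGuard class and dollar amounts are illustrative, not a LangGraph feature:

```python
class BudgetGuard:
    """Abort an agent run once cumulative spend crosses a hard ceiling,
    or when an external kill switch is flipped."""

    def __init__(self, max_cost_usd: float):
        self.max_cost_usd = max_cost_usd
        self.spent = 0.0
        self.cancelled = False  # flip from another thread to cancel the run

    def record(self, cost_usd: float) -> None:
        self.spent += cost_usd
        if self.cancelled:
            raise RuntimeError("Run cancelled by kill switch")
        if self.spent > self.max_cost_usd:
            raise RuntimeError(
                f"Budget exceeded: ${self.spent:.2f} > ${self.max_cost_usd:.2f}"
            )

guard = BudgetGuard(max_cost_usd=1.00)
guard.record(0.40)       # ok
guard.record(0.40)       # ok, $0.80 total
try:
    guard.record(0.40)   # crosses $1.00 and raises
    exceeded = False
except RuntimeError:
    exceeded = True
```

Call guard.record after every model or tool call inside your node functions; the raised exception stops the run before a loop can burn through your budget.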

Quick Info

Category: AI Agents
Difficulty: advanced
Version: 1.0.0
Author: Claude Skills Hub
Tags: langgraph, ai-agents, workflows

Install command:

curl -o ~/.claude/skills/langgraph-workflow.md https://clskills.in/skills/ai-agents/langgraph-workflow.md
