
AI Agent Tools


Create custom tools for AI agents (search, calculator, API)

Works with Claude Code

You are an AI engineer building extensible tool systems for autonomous agents. The user wants to create custom tools (search, calculator, API wrappers) that agents can discover, call, and chain together.

What to check first

  • Verify the anthropic SDK is installed: pip list | grep anthropic
  • Check that the ANTHROPIC_API_KEY environment variable is set: echo $ANTHROPIC_API_KEY
  • Ensure requests is installed for HTTP calls: pip install requests

Steps

  1. Define tool schemas as JSON with name, description, and input_schema (object with properties and required fields)
  2. Create handler functions that match each tool's input parameters exactly
  3. Initialize the Anthropic client and enable tool use in the model call
  4. Build a tool dispatch dictionary mapping tool names to handler functions
  5. Implement an agent loop that calls messages.create() with tools parameter
  6. Parse tool_use blocks from the response and execute matching handlers
  7. Return tool results back to the agent in the conversation with tool_result blocks
  8. Continue looping until the agent returns stop_reason: "end_turn" (no more tool calls)

Code

import anthropic
import json
import requests
import math

# Initialize client
client = anthropic.Anthropic()

# Define tools as JSON schemas
tools = [
    {
        "name": "calculator",
        "description": "Perform basic arithmetic operations: add, subtract, multiply, divide, power, sqrt",
        "input_schema": {
            "type": "object",
            "properties": {
                "operation": {
                    "type": "string",
                    "enum": ["add", "subtract", "multiply", "divide", "power", "sqrt"],
                    "description": "The arithmetic operation to perform"
                },
                "a": {
                    "type": "number",
                    "description": "First number (required for all operations)"
                },
                "b": {
                    "type": "number",
                    "description": "Second number (required for add, subtract, multiply, divide, power; omit for sqrt)"
                }
            },
            "required": ["operation", "a"]
        }
    },
    {
        "name": "web_search",
        "description": "Search the web for information about a topic",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query to execute"
                }
            },
            "required": ["query"]
        }
    },
    {
        "name": "fetch_api",
        "description": "Make HTTP GET request to an API endpoint",
        "input_schema": {
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",

Note: this example was truncated in the source. See the GitHub repo for the latest full version.
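The truncated portion covers steps 2 through 8: the handler functions, the dispatch table, and the agent loop. The sketch below shows one way to wire those pieces together. It is not the repo's exact code: the model string is a placeholder, and run_agent and execute_tool are illustrative names.

```python
import math

# Step 2: handlers whose signatures mirror each tool's input_schema
def calculator(operation, a, b=None):
    ops = {
        "add": lambda: a + b,
        "subtract": lambda: a - b,
        "multiply": lambda: a * b,
        "divide": lambda: a / b,
        "power": lambda: a ** b,
        "sqrt": lambda: math.sqrt(a),
    }
    return ops[operation]()

# Step 4: dispatch dictionary mapping tool names to handlers
TOOL_HANDLERS = {"calculator": calculator}

def execute_tool(name, tool_input):
    handler = TOOL_HANDLERS.get(name)
    if handler is None:  # guard against hallucinated tool names
        return f"Error: unknown tool '{name}'"
    try:
        return str(handler(**tool_input))
    except Exception as e:
        return f"Error: {e}"

# Steps 5-8: agent loop; client is an anthropic.Anthropic() instance
def run_agent(client, tools, user_message, max_iterations=10):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_iterations):
        response = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model name
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            return response  # "end_turn": no more tool calls
        # Step 6: run each tool_use block; step 7: send tool_result back
        messages.append({"role": "assistant", "content": response.content})
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": execute_tool(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
    raise RuntimeError("max_iterations reached without end_turn")
```

The loop exits as soon as stop_reason is anything other than "tool_use", which implements step 8 without a separate flag.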

Common Pitfalls

  • Letting agents loop indefinitely without a hard step limit — set max_iterations to 10-20 for most workflows
  • Passing entire conversation history every iteration — costs explode. Use summarization or sliding window
  • Not validating tool outputs before passing them to the next step — one bad output corrupts the entire chain
  • Trusting the agent's self-evaluation — agents are notoriously bad at knowing when they're wrong
  • Forgetting that agents can hallucinate tool calls that don't exist — always validate tool names against your registry

When NOT to Use This Skill

  • When a single LLM call would suffice — agents add 5-10x latency and cost
  • When the task has well-defined steps that don't need branching logic — use a workflow engine instead
  • For high-stakes decisions without human review — agents make confident mistakes

How to Verify It Worked

  • Run the agent on 10+ test cases including edge cases — track success rate, average steps, and total cost
  • Compare agent output to human baseline — if a human can do it faster and cheaper, you don't need an agent
  • Inspect the full reasoning trace, not just the final output — agents often arrive at correct answers via wrong reasoning
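A small harness makes the success-rate and cost checks above repeatable. This is a sketch under one assumption: your agent entry point can be wrapped to return (output, steps, cost) per run. The evaluate name and the case format are illustrative.

```python
def evaluate(agent_fn, test_cases):
    # agent_fn(input_text) -> (output, steps_taken, cost_usd)
    results = []
    for case in test_cases:
        output, steps, cost = agent_fn(case["input"])
        results.append({
            "ok": output == case["expected"],
            "steps": steps,
            "cost": cost,
        })
    n = len(results)
    return {
        "success_rate": sum(r["ok"] for r in results) / n,
        "avg_steps": sum(r["steps"] for r in results) / n,
        "total_cost": sum(r["cost"] for r in results),
    }
```

Exact-match comparison is the simplest check; for free-form outputs you would swap in a task-specific scorer.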

Production Considerations

  • Set hard cost ceilings per agent run — a runaway agent can burn $50+ in minutes
  • Log every tool call, every model call, every state transition — debugging agents without logs is impossible
  • Have a kill switch — agents should be cancelable mid-run without corrupting state
  • Monitor token usage trends — context bloat is the #1 cause of agent cost overruns
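The cost ceiling and kill switch above can live in one small run-state object that the agent loop checks each iteration. A sketch, with illustrative names and placeholder per-million-token prices (check your model's current pricing):

```python
import threading

class AgentRun:
    def __init__(self, cost_ceiling_usd=5.0):
        self.cost_ceiling = cost_ceiling_usd
        self.spent = 0.0
        self._cancelled = threading.Event()

    def cancel(self):
        # Kill switch: safe to call from another thread mid-run.
        self._cancelled.set()

    def charge(self, usage, price_in=3.0, price_out=15.0):
        # usage: {"input_tokens": ..., "output_tokens": ...}
        # prices are USD per million tokens (placeholders)
        self.spent += (usage["input_tokens"] * price_in
                       + usage["output_tokens"] * price_out) / 1_000_000

    def should_stop(self):
        return self._cancelled.is_set() or self.spent >= self.cost_ceiling
```

Inside the loop, call charge() after every model response and bail out when should_stop() is true, logging the reason so aborted runs are debuggable.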

Quick Info

Category: AI Agents
Difficulty: intermediate
Version: 1.0.0
Author: Claude Skills Hub
Tags: ai-agents, tools, custom

Install command:

curl -o ~/.claude/skills/ai-agent-tools.md https://clskills.in/skills/ai-agents/ai-agent-tools.md

Related AI Agents Skills

Other Claude Code skills in the same category — free to download.

Want an AI Agents skill personalized to YOUR project?

This is a generic skill that works for everyone. Our AI can generate one tailored to your exact tech stack, naming conventions, folder structure, and coding patterns — with 3x more detail.