
OpenAI Integration

Integrate OpenAI API with best practices

Works with Claude Code

You are an AI integration specialist. The user wants to integrate the OpenAI API into their application with production-ready best practices, including error handling, rate limiting, cost control, and secure credential management.

Steps

  1. Install the official OpenAI SDK: npm install openai (Node) or pip install openai (Python)
  2. Store your API key in environment variables — never hardcode it; use a .env file with OPENAI_API_KEY=sk-... and keep that file out of version control
  3. Initialize the OpenAI client with your API key from environment variables using the SDK's client constructor
  4. Implement request retry logic with exponential backoff for rate limit (429) and server errors (5xx)
  5. Set max_tokens parameter explicitly to control cost and prevent unexpectedly long responses
  6. Use temperature=0.7 for a balance of creativity and consistency; go lower (0.2) for deterministic tasks, higher (0.9+) for creative ones
  7. Wrap API calls in try-catch blocks to handle APIConnectionError, RateLimitError, and APIStatusError exceptions
  8. Log request metadata (model, tokens used, cost estimate) for monitoring and billing reconciliation

Code

import OpenAI from "openai";
import * as dotenv from "dotenv";

dotenv.config();

// Initialize client — reads OPENAI_API_KEY from environment
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  timeout: 60000, // 60 second timeout
});

// Retry wrapper with exponential backoff
async function callOpenAIWithRetry(
  fn,
  maxRetries = 3,
  baseDelay = 1000
) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (
        error.status === 429 ||
        error.status >= 500 ||
        (error.code && error.code.includes("ECONNRESET"))
      ) {
        if (attempt < maxRetries - 1) {
          const delay = baseDelay * Math.pow(2, attempt);
          console.log(
            `Rate limited or server error. Retrying in ${delay}ms...`
          );
          await new Promise((resolve) => setTimeout(resolve, delay));
          continue;
        }
      }
      throw error;
    }
  }
}
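The retry wrapper above covers steps 1-4, but not the cost controls and logging from steps 5, 6, and 8. A minimal sketch of those, where askModel and estimateCost are hypothetical helper names and the per-1K-token prices are placeholders rather than real pricing:

```javascript
// Hypothetical helper: one chat call with explicit cost controls and logging.
// Assumes `client` is an initialized OpenAI instance like the one above.
async function askModel(client, prompt, options = {}) {
  const {
    model = "gpt-4o-mini", // placeholder model name; use your config's value
    maxTokens = 512,
    temperature = 0.7,
  } = options;

  const response = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
    max_tokens: maxTokens, // step 5: explicit output cap
    temperature,           // step 6: tune per task
  });

  // Step 8: log metadata for monitoring and billing reconciliation.
  const usage = response.usage; // { prompt_tokens, completion_tokens, total_tokens }
  console.log(
    `[${model}] tokens=${usage.total_tokens} est_cost=$${estimateCost(usage).toFixed(6)}`
  );
  return response.choices[0].message.content;
}

// Rough cost estimate. The per-1K-token prices are illustrative defaults;
// check your provider's current pricing page for real numbers.
function estimateCost(usage, inPer1K = 0.00015, outPer1K = 0.0006) {
  return (
    (usage.prompt_tokens / 1000) * inPer1K +
    (usage.completion_tokens / 1000) * outPer1K
  );
}
```

Wrapping a call like `callOpenAIWithRetry(() => askModel(client, prompt))` combines the retry and cost-control pieces.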

Common Pitfalls

  • Forgetting to handle rate limits — OpenAI returns 429 errors that need exponential backoff
  • Hardcoding the model name in 50 places — use a single config so you can swap models in one place
  • Not setting a timeout on API calls — a hanging request can lock your worker indefinitely
  • Logging API responses that contain sensitive data — PII can end up in your logs without your realizing it
  • Treating the API as deterministic — same prompt, different output. Test on multiple runs
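For the hardcoded-model pitfall, one illustrative fix is a single config module that every call site reads from. All names here (modelConfig, resolveModel, the task keys, the model strings) are hypothetical:

```javascript
// Single source of truth for model selection; swapping models is now a
// one-line change instead of an edit in 50 places.
const modelConfig = {
  default: "gpt-4o-mini", // placeholder model names
  overrides: {
    summarize: "gpt-4o-mini",
    extraction: "gpt-4o",
  },
};

// Call sites ask the config for a model by task name instead of hardcoding one.
function resolveModel(task) {
  return modelConfig.overrides[task] ?? modelConfig.default;
}
```

A call site then reads `model: resolveModel("summarize")` rather than embedding a literal model string.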

When NOT to Use This Skill

  • For deterministic tasks where regex or rule-based code would work — LLMs add cost and latency for no benefit
  • When you need 100% accuracy on a known schema — use structured output APIs or fine-tuning instead
  • For real-time low-latency applications under 100ms — even the fastest LLM is too slow

How to Verify It Worked

  • Test with malformed inputs, empty strings, and edge cases — APIs often behave differently than docs suggest
  • Verify your error handling on all 4xx and 5xx responses — most code only handles the happy path
  • Run a load test with 10x your expected traffic — rate limits hit fast
  • Check token usage matches your estimate — surprises here become surprises on your bill

Production Considerations

  • Set a usage limit in your OpenAI dashboard — prevents runaway costs from bugs or attacks
  • Use prompt caching for static parts of your prompts — can cut costs by 50-90%
  • Stream responses for any user-facing output — perceived latency drops by 70%
  • Have a fallback model ready — if OpenAI is down, you should be able to swap to a backup with one config change
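The streaming and fallback points above can be sketched together. This assumes the openai Node SDK's stream: true async-iterable interface; streamWithFallback is a hypothetical helper and the model names in MODEL_FALLBACKS are placeholders:

```javascript
// Ordered list of models to try; placeholder names, configure for your stack.
const MODEL_FALLBACKS = ["gpt-4o-mini", "gpt-3.5-turbo"];

// Streams tokens to `onToken` as they arrive; on failure, falls through to
// the next model in the list. `client` is an initialized OpenAI instance.
async function streamWithFallback(client, messages, onToken) {
  let lastError;
  for (const model of MODEL_FALLBACKS) {
    try {
      const stream = await client.chat.completions.create({
        model,
        messages,
        stream: true,
      });
      let full = "";
      for await (const chunk of stream) {
        const token = chunk.choices[0]?.delta?.content ?? "";
        full += token;
        onToken(token); // e.g. write to the user's socket immediately
      }
      return full;
    } catch (error) {
      lastError = error;
      console.warn(`${model} failed (${error.message}); trying next model...`);
    }
  }
  throw lastError; // every fallback failed
}
```

Because the fallback order lives in one array, swapping the backup model is the one-config-change swap described above.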

Quick Info

Difficulty: intermediate
Version: 1.0.0
Author: Claude Skills Hub
Tags: ai, openai, integration

Install command:

curl -o ~/.claude/skills/openai-integration.md https://claude-skills-hub.vercel.app/skills/ai-ml/openai-integration.md
