Monitoring & Logging (intermediate)

Metrics Collector


Implement custom metrics collection

Works with OpenClaude

You are a monitoring engineer implementing custom metrics collection for application observability. The user wants to build a reusable metrics collector that captures application-specific measurements and exports them to monitoring systems.

What to check first

  • Verify your monitoring backend supports the metrics format (Prometheus, StatsD, CloudWatch, etc.)
  • Run npm list prom-client (or the equivalent for your language) to confirm the metrics library is installed
  • Check if you need to expose metrics on an HTTP endpoint or push them to a collector service

Steps

  1. Choose your metrics library based on backend — prom-client for Prometheus/Pushgateway (Node.js), node-statsd for StatsD, or the AWS SDK for CloudWatch
  2. Create a MetricsCollector class that wraps the library with consistent API methods for Counter, Gauge, Histogram, and Summary metric types
  3. Initialize the collector with namespace/prefix and labels that identify your service (app name, environment, version)
  4. Define custom metrics as class properties or in a registry — Counter for request counts, Gauge for current values, Histogram for request durations
  5. Implement observe/record methods that accept metric name and value, validating against registered metric definitions
  6. Create a middleware or decorator to automatically collect latency and error rate metrics from request handlers
  7. Set up periodic export via either HTTP endpoint registration (Prometheus scrape) or active push (StatsD/CloudWatch)
  8. Add error handling to ensure metric collection failures don't crash your application
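To make the Histogram semantics in step 4 concrete, here is a dependency-free sketch of the bookkeeping a Prometheus histogram performs (prom-client does this for you; the function name is illustrative):

```javascript
// Sketch of Prometheus histogram bookkeeping. Buckets are cumulative:
// each bucket counts observations <= its upper bound, alongside a
// running sum and a total observation count.
function makeHistogram(buckets = [0.1, 0.5, 1, 2, 5]) {
  const counts = new Array(buckets.length).fill(0);
  let sum = 0;
  let count = 0;
  return {
    observe(value) {
      for (let i = 0; i < buckets.length; i++) {
        if (value <= buckets[i]) counts[i] += 1;
      }
      sum += value;
      count += 1;
    },
    snapshot() {
      return { buckets, counts: counts.slice(), sum, count };
    },
  };
}

const h = makeHistogram();
[0.05, 0.3, 1.5].forEach((v) => h.observe(v));
console.log(h.snapshot().counts); // [1, 2, 2, 3, 3]
```

The cumulative layout is why Prometheus can compute quantiles server-side with histogram_quantile: each bucket already answers "how many observations were at or below this bound".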

Code

const prometheus = require('prom-client');

class MetricsCollector {
  constructor(namespace = 'app', defaultLabels = {}) {
    this.namespace = namespace;
    this.metrics = {};
    
    // Register default labels for all metrics
    prometheus.register.setDefaultLabels(defaultLabels);
  }

  // Create and store Counter metric
  createCounter(name, help, labelNames = []) {
    const metricName = `${this.namespace}_${name}`;
    this.metrics[name] = new prometheus.Counter({
      name: metricName,
      help: help,
      labelNames: labelNames,
    });
    return this.metrics[name];
  }

  // Create and store Gauge metric
  createGauge(name, help, labelNames = []) {
    const metricName = `${this.namespace}_${name}`;
    this.metrics[name] = new prometheus.Gauge({
      name: metricName,
      help: help,
      labelNames: labelNames,
    });
    return this.metrics[name];
  }

  // Create and store Histogram metric
  // Create and store Histogram metric
  createHistogram(name, help, labelNames = [], buckets = [0.1, 0.5, 1, 2, 5]) {
    const metricName = `${this.namespace}_${name}`;
    this.metrics[name] = new prometheus.Histogram({
      name: metricName,
      help: help,
      labelNames: labelNames,
      buckets: buckets,
    });
    return this.metrics[name];
  }

  // Expose all registered metrics in the Prometheus text exposition
  // format — wire this to your /metrics scrape endpoint
  async getMetrics() {
    return prometheus.register.metrics();
  }
}

module.exports = MetricsCollector;

Common Pitfalls

  • Using high-cardinality label values (user IDs, request IDs) — every distinct label combination creates a new time series and can overwhelm your backend
  • Registering the same metric name twice — prom-client throws on duplicate registration, so create each metric once and reuse the instance
  • Skipping the verification steps — you don't know it worked until you measure

When NOT to Use This Skill

  • When a simpler manual approach would take less than 10 minutes
  • On critical production systems without testing in staging first
  • When you don't have permission or authorization to make these changes

How to Verify It Worked

  • Hit your /metrics endpoint (or check your StatsD/CloudWatch dashboard) and confirm every registered metric appears with its help text
  • Exercise the instrumented code paths and verify counters and histogram counts change by the expected amounts
  • Check logs for any warnings or errors — silent failures are the worst kind

Production Considerations

  • Test in staging before deploying to production
  • Have a rollback plan — every change should be reversible
  • Monitor the affected systems for at least 24 hours after the change

Quick Info

Difficulty: intermediate
Version: 1.0.0
Author: Claude Skills Hub
Tags: monitoring, metrics-collection

Install command:

curl -o ~/.claude/skills/metrics-collector.md https://claude-skills-hub.vercel.app/skills/monitoring/metrics-collector.md
