Write SPL queries for search, stats, and timechart
You are a Splunk Search Processing Language (SPL) expert. The user wants to write SPL queries for search, stats, and timechart operations to analyze and visualize data in Splunk.
What to check first
- Verify your Splunk instance is running and you have access to the search bar (usually at `http://localhost:8000` or your Splunk URL)
- Run a basic search like `index=main | head 10` to confirm data is indexed and searchable
- Check which fields are available in your dataset using `index=main | fields` or the field picker in the UI
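The checks above can be run as quick searches from the search bar. A minimal sketch, assuming your data lives in the default `main` index:

```spl
# Confirm data is indexed and searchable
index=main | head 10

# Summarize which fields exist and how often they are populated
index=main | fieldsummary | table field count distinct_count
```

`fieldsummary` is a convenient alternative to the field picker when you want field coverage as a table rather than a sidebar.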
Steps
- Start with a base search using `index=<indexname>` to narrow down your data source; optionally add `source=<sourcename>` or `sourcetype=<sourcetypename>` filters
- Use the pipe operator `|` to chain commands; SPL reads left to right, and each pipe passes results to the next command
- Apply the `search` command with field-value pairs like `search status=200 user=admin` to filter records after the initial index search
- Use the `stats` command with aggregation functions: `stats count`, `stats sum(bytes)`, `stats avg(response_time)`, `stats dc(user)` (distinct count)
- Group stats by field using the `by` clause: `stats count by status` creates a row for each unique status value with its count
- Chain multiple grouping fields: `stats sum(bytes) by host, status` groups by both host and status
- Use the `timechart` command to create time-series data: `timechart count by status` groups counts over time buckets (the span is chosen automatically from the search time range unless you set one)
- Control the time bucket size with `timechart span=1h count` for hourly buckets, or `span=5m` for 5-minute intervals
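The steps above chain together into a single pipeline. A minimal sketch (the sourcetype and field names are illustrative, not from your environment):

```spl
# Base search -> filter -> aggregate -> sort, chained with pipes
index=main sourcetype=access_combined
| search status>=400
| stats count by status
| sort - count
```

Each pipe hands the current result set to the next command, so narrowing early (in the base search) keeps later stages cheap.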
Code
# Search: Find all failed authentication attempts in the last 24 hours
index=main sourcetype=auth status=failure earliest=-24h | search user!="system"
# Stats: Count login attempts per user with average response time
index=main sourcetype=auth earliest=-7d
| stats count as login_attempts, avg(response_time) as avg_response_ms by user
| where login_attempts > 5
| sort - login_attempts
# Timechart: HTTP traffic volume by status code over the last week, hourly buckets
index=main sourcetype=http earliest=-7d
| timechart span=1h count by status
| fillnull value=0
# Complex stats: Calculate percentiles and multiple aggregations
index=main sourcetype=app_logs earliest=-1d
| stats count, avg(latency_ms) as avg_latency, max(latency_ms) as max_latency,
perc95(latency_ms) as p95_latency, dc(session_id) as unique_sessions by app_name
| where avg_latency > 500
# Timechart with stats
Note: this example was truncated in the source. See the GitHub repo for the latest full version.
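The truncated example is not reproduced here. As a hypothetical sketch of the same pattern, one common way to combine stats with a time axis is to bucket `_time` with `bin` and then aggregate per bucket (field names are illustrative):

```spl
# Bucket events into 1-hour spans, then compute per-bucket stats
index=main sourcetype=http earliest=-7d
| bin _time span=1h
| stats avg(response_time) as avg_ms, count by _time, status
```

Unlike `timechart`, this form lets you group by `_time` plus more than one additional field.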
Common Pitfalls
- Treating this skill as a one-shot solution — most workflows need iteration and verification
- Skipping the verification steps — you don't know it worked until you measure
- Applying this skill without understanding the underlying problem — read the related docs first
When NOT to Use This Skill
- When a simpler manual approach would take less than 10 minutes
- On critical production systems without testing in staging first
- When you don't have permission or authorization to make these changes
How to Verify It Worked
- Run the verification steps documented above
- Compare the output against your expected baseline
- Check logs for any warnings or errors — silent failures are the worst kind
Production Considerations
- Test in staging before deploying to production
- Have a rollback plan — every change should be reversible
- Monitor the affected systems for at least 24 hours after the change
Related Splunk Skills
Splunk Dashboard
Build Splunk dashboards with panels and drilldowns
Splunk Alerts
Configure Splunk alerts with throttling and actions
Splunk SPL Optimizer
Optimize slow Splunk searches for faster results and lower license usage
Splunk Alert Tuning
Tune Splunk alerts to reduce false positives without missing real incidents