
Databricks SQL Warehouse


Query and visualize data with Databricks SQL warehouses and dashboards

Works with OpenClaude

You are a Databricks SQL expert. The user wants to query and visualize data with Databricks SQL warehouses and dashboards.

What to check first

  • Verify you have a running Databricks workspace with admin access and SQL Warehouse enabled
  • Run SELECT current_warehouse() in Databricks SQL editor to confirm active warehouse
  • Confirm you have a catalog and schema created (e.g., SHOW CATALOGS; and SHOW SCHEMAS IN <catalog>;)
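The pre-flight checks above can also be scripted. The helper below is a minimal sketch: `run_preflight` is our own name (not from Databricks docs), the SQL statements are taken verbatim from the checklist, and we assume a DB-API style cursor such as the one returned by databricks-sql-connector.

# Sketch: run the pre-flight checks above against any DB-API style cursor.
# `run_preflight` is a hypothetical helper; the SQL comes from the checklist.
def run_preflight(cursor, catalog="main"):
    """Return the results of the basic readiness checks as a dict."""
    checks = {
        "warehouse": "SELECT current_warehouse()",
        "catalogs": "SHOW CATALOGS",
        "schemas": f"SHOW SCHEMAS IN {catalog}",
    }
    results = {}
    for name, stmt in checks.items():
        cursor.execute(stmt)
        results[name] = cursor.fetchall()
    return results

Pass in the cursor you open later in the Code section; any check that raises tells you which prerequisite is missing.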

Steps

  1. Navigate to Databricks workspace and click SQL Warehouses in the left sidebar; select or create a SQL Warehouse with appropriate compute size
  2. Open the SQL Editor and connect to your warehouse using the dropdown at the top-right of the query window
  3. Create or select a table to query using USE <catalog>.<schema>; then SELECT * FROM <table_name> LIMIT 10;
  4. Write your analytical query using standard SQL syntax; Databricks supports window functions, CTEs, and joins across Unity Catalog tables
  5. Click Run (or press Ctrl+Enter / Cmd+Enter) to execute the query and view results in the grid below
  6. Click the Visualization tab below results; select chart type (line, bar, scatter, etc.) and configure X/Y axes
  7. Customize the visualization: add title, configure aggregations, set filters using the gear icon
  8. Save the visualization to a new dashboard, or add it to an existing one by selecting the dashboard name; then set a refresh schedule (Manual, 1 hour, 6 hours, or 24 hours)

Code

# Connect to a Databricks SQL Warehouse and query data programmatically
# Requires: pip install databricks-sql-connector pandas
from databricks import sql
import pandas as pd

# Initialize a connection to the SQL Warehouse
connection = sql.connect(
    server_hostname="<workspace-instance>.cloud.databricks.com",
    http_path="/sql/1.0/warehouses/<warehouse-id>",
    access_token="<your-pat-token>",  # personal access token (PAT)
)

# Create a cursor and set the default catalog and schema
cursor = connection.cursor()
cursor.execute("USE catalog_name.schema_name")

# Aggregate the last 90 days of orders by day and product category
query = """
SELECT
    DATE_TRUNC('day', order_date) AS order_day,
    product_category,
    SUM(order_amount) AS total_revenue,
    COUNT(DISTINCT customer_id) AS unique_customers,
    AVG(order_amount) AS avg_order_value
FROM orders
WHERE order_date >= CURRENT_DATE - INTERVAL 90 DAY
GROUP BY DATE_TRUNC('day', order_date), product_category
ORDER BY order_day DESC, total_revenue DESC
"""
cursor.execute(query)
results = cursor.fetchall()

# Load the results into a DataFrame for further analysis
df = pd.DataFrame(results, columns=[desc[0] for desc in cursor.description])
print(df.head(20))

# Release warehouse resources
cursor.close()
connection.close()

Note: this example was truncated in the source. See the GitHub repo for the latest full version.
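Once the results are in a DataFrame, they can be charted locally as well as in the Databricks UI. The sketch below uses synthetic data in the same shape as the aggregation query above; the values are made up for illustration only.

import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Synthetic stand-in for the query results above (illustrative values only)
df = pd.DataFrame({
    "order_day": pd.to_datetime(["2024-06-01", "2024-06-01",
                                 "2024-06-02", "2024-06-02"]),
    "product_category": ["books", "games", "books", "games"],
    "total_revenue": [1200.0, 800.0, 950.0, 1100.0],
})

# Pivot so each product category becomes its own line on the chart
pivoted = df.pivot(index="order_day", columns="product_category",
                   values="total_revenue")
ax = pivoted.plot(kind="line", title="Daily revenue by category")
ax.set_ylabel("total_revenue")
plt.savefig("daily_revenue.png")

The same pivot shape (one column per category, dates on the index) is what the built-in line chart in the Visualization tab produces when you set order_day as the X axis and group by product_category.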

Common Pitfalls

  • Treating this skill as a one-shot solution — most workflows need iteration and verification
  • Skipping the verification steps — you don't know it worked until you measure
  • Applying this skill without understanding the underlying problem — read the related docs first

When NOT to Use This Skill

  • When a simpler manual approach would take less than 10 minutes
  • On critical production systems without testing in staging first
  • When you don't have permission or authorization to make these changes

How to Verify It Worked

  • Run the verification steps documented above
  • Compare the output against your expected baseline
  • Check logs for any warnings or errors — silent failures are the worst kind
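For the query in this skill, "compare against your expected baseline" can be as simple as sanity assertions on the returned DataFrame. A minimal sketch follows; the column names match the aggregation query above, while the function name and thresholds are placeholders you should tune to your data.

import pandas as pd

def verify_results(df, min_rows=1):
    """Raise AssertionError if the aggregated results look wrong."""
    expected_cols = {"order_day", "product_category", "total_revenue",
                     "unique_customers", "avg_order_value"}
    assert expected_cols.issubset(df.columns), "missing columns"
    assert len(df) >= min_rows, "query returned too few rows"
    assert (df["total_revenue"] >= 0).all(), "negative revenue is suspicious"
    assert (df["unique_customers"] >= 1).all(), "zero-customer groups"
    return True

Run it right after fetchall() so a silently empty or malformed result fails loudly instead of flowing into a dashboard.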

Production Considerations

  • Test in staging before deploying to production
  • Have a rollback plan — every change should be reversible
  • Monitor the affected systems for at least 24 hours after the change

Quick Info

  Category:   Databricks
  Difficulty: beginner
  Version:    1.0.0
  Author:     Claude Skills Hub
  Tags:       databricks, sql, warehouse

Install command:

curl -o ~/.claude/skills/databricks-sql.md https://clskills.in/skills/databricks/databricks-sql.md
