Splat - Lightweight Error Tracker

Splat is a simple error tracking service inspired by GlitchTip. It provides a lightweight alternative to Sentry for applications that need fast, reliable error monitoring.

This app has zero authentication by default but supports OIDC.

It has an (awesome) MCP endpoint at /mcp. You need to set the MCP_AUTH_TOKEN environment variable in order to use it.

A large percentage of it was written with GLM 4.6 and Sonnet 4.5. It was partly an experiment in using SQLite extensively in a write-heavy service. It has performed well enough for my use case.

I've only used Splat with Rails, but there's no reason it shouldn't work with other systems. Happy to accept pull requests for wider compatibility.

If you're looking for other Sentry clones, take a look at Glitchtip, Bugsink & Telebugs.

Overview

Named after Ruby's splat operator and the satisfying sound of squashing bugs, Splat accepts Sentry-compatible error events and transaction data via a simple API endpoint, processes them asynchronously, and presents them in a clean, fast web interface.

Key Features

  • Sentry Protocol Compatible - Drop-in replacement for Sentry client SDKs
  • Single-Tenant Design - Simple setup, no user management overhead
  • Fast Ingestion - Errors appear in the UI within seconds
  • Performance Monitoring - Transaction data with lightweight metrics
  • Endpoint Impact Ranking - Surfaces controllers ranked by total time spent (avg × count) plus p95, so you optimise where it actually pays back
  • N+1 Query Detection - Mines measurements.query_analysis from the transaction span analyzer, ranks endpoints by N+1 prevalence, exposes a dedicated worklist and an MCP tool
  • Release Tracking - Stamps issues with first/last seen release, overlays deploy markers on issue sparklines so regressions are visible at a glance
  • Span Waterfall - Per-transaction span tree stored in DuckLake (columnar, compressed) and rendered as a tiered timeline on the transaction detail page
  • OLTP + OLAP storage - SQLite for ingestion and OLTP, DuckLake (DuckDB + parquet) for analytics over long retention
  • MCP Integration - Query errors via Claude and AI assistants
  • Minimal Dependencies - Rails + SQLite + DuckDB + Solid Queue
  • Auto-Cleanup - Configurable data retention (default 90 days)

Why Splat?

When you need error tracking that:

  • Your code assistant can grab issues and stack traces from
  • Shows errors within seconds
  • Can be understood and modified in one sitting
  • Runs on Rails 8 / Ruby 3.4.6 / SQLite (OLTP) + DuckLake (OLAP) + the Solid stack (Queue/Cache/Cable)

Screenshots

1. Projects Dashboard

Projects Dashboard

2. Project Detail View

Project Detail

3. Issues List

Issues List

4. Issue Detail with Stack Trace

Issue Detail

5. Event Details

Event Details

6. Performance Monitoring

Performance Monitoring

Getting Started

Prerequisites

  • Ruby 3.4.6+
  • Rails 8+
  • SQLite3

Installation

git clone <repository-url>
cd splat
bundle install
bin/rails db:prepare
bin/dev

Configuration

Email Notifications

Configure SMTP settings for email notifications when issues are created or reopened:

# Required settings
SMTP_ADDRESS=smtp.gmail.com
SMTP_PORT=587
SMTP_USER_NAME=your-email@gmail.com
SMTP_PASSWORD=your-app-password

# Optional settings (with defaults shown)
SMTP_DOMAIN=localhost
SMTP_AUTHENTICATION=plain
SMTP_STARTTLS_AUTO=true
SPLAT_HOST=splat.example.com
SPLAT_INTERNAL_HOST=100.x.x.x:3030  # e.g. your Tailscale IP; used to display an alternate DSN

# For local development with self-signed certificates, use:
SMTP_OPENSSL_VERIFY_MODE=none

# Email recipients
SPLAT_ADMIN_EMAILS=admin@example.com,dev@example.com
SPLAT_EMAIL_FROM=splat@example.com

Email Notification Control

# Enable email notifications in development
SPLAT_EMAIL_NOTIFICATIONS=true

# In production, emails are sent by default

Deployment

Docker Compose

x-common-variables: &common-variables
  RAILS_ENV: production
  SECRET_KEY_BASE: ${SECRET_KEY_BASE}
  SPLAT_HOST: ${SPLAT_HOST}
  SPLAT_ADMIN_EMAILS: ${SPLAT_ADMIN_EMAILS}
  SPLAT_EMAIL_FROM: ${SPLAT_EMAIL_FROM}

  SMTP_ADDRESS: ${SMTP_ADDRESS}
  SMTP_PORT: ${SMTP_PORT}
  SMTP_USER_NAME: ${SMTP_USER_NAME}
  SMTP_PASSWORD: ${SMTP_PASSWORD}

  # MCP Authentication Token
  MCP_AUTH_TOKEN: ${MCP_AUTH_TOKEN}

services:
  splat:
    image: reg.tbdb.info/splat:latest
    environment:
      <<: *common-variables
      MISSION_CONTROL_USERNAME: ${MISSION_CONTROL_USERNAME}
      MISSION_CONTROL_PASSWORD: ${MISSION_CONTROL_PASSWORD}
    volumes:
      - /storage/splat/storage:/rails/storage
      - /storage/splat/logs/web:/rails/log
    ports:
      - "${HOST_IP}:3030:3000"
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"

  jobs:
    image: reg.tbdb.info/splat:latest
    environment:
      <<: *common-variables
      SOLID_QUEUE_THREADS: 3
      SOLID_QUEUE_PROCESSES: 1
    volumes:
      - /storage/splat/storage:/rails/storage
      - /storage/splat/logs/jobs:/rails/log
    command: bundle exec bin/jobs
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"

Authentication

  1. None: Anyone can access Splat; ensure it's running internally or within a VPN
  2. Basic Auth: Use your webserver to implement Basic Auth, excluding the /api/ and /mcp/ endpoints, as they're already token-authenticated
  3. OIDC: Set the OIDC_* environment variables described below

Basic Auth

Assuming a Caddy server which forwards traffic to Splat. The following configuration uses basic auth, but allows free access to the /api/ and /mcp/ endpoints.

splat.booko.info {
  encode zstd gzip

  # Handle /api/* and /mcp/* routes without basic auth (both use token auth)
  handle /api/* /mcp* {
    reverse_proxy * {
      to http://<ip address>:3030
    }
  }

  # Handle all other routes with basic auth
  handle {
    basicauth {
      <user> <basic-auth-hash>
    }
    reverse_proxy * {
      to http://<ip address>:3030
    }
  }

  log {
    output file /var/log/caddy/splat.log
  }
}

Generate the basic auth hash with `docker compose exec -it caddy caddy hash-password`

OIDC

Splat supports OIDC. Set the following environment variables:

OIDC_CLIENT_ID=<OIDC CLIENT ID>
OIDC_CLIENT_SECRET=<OIDC CLIENT SECRET>
OIDC_DISCOVERY_URL=<OIDC DISCOVERY URL>

SPLAT_ALLOWED_USERS="Comma-separated list of email addresses allowed to access Splat"
SPLAT_ALLOWED_DOMAINS="Comma-separated list of email domains allowed to access Splat"

# Optional
OIDC_PROVIDER_NAME=<OIDC provider name>
OIDC_REQUIRE_PKCE=<true/false>

Performance

Splat has been tested in production handling real-world traffic with excellent results.

Production Metrics

Sustained load: ~1,550 transactions/minute (~26 tx/s)

  • Web container: 1.07 GB RAM, ~19% CPU
  • Jobs container: 340 MB RAM, ~20% CPU
  • Queue depth: 0 (no backlog)
  • No database locks or contention

Total resources: ~1.4 GB RAM, ~0.8 CPU cores for both containers combined.

Throughput: ~2.2 million transactions/day

SQLite Performance

At 26 transactions/second sustained with ~950k transactions in database (4.7GB):

  • ✅ No SQLITE_BUSY errors
  • ✅ No write conflicts
  • ✅ Linear CPU scaling with load
  • ✅ Stable memory usage (plateaus around 1GB for web container)
  • ✅ Memory remains stable as throughput increases (tested 14-26 tx/s)
  • ✅ Database size has no impact on ingestion performance (so far)

Rails 8.1's SQLite optimizations (WAL mode, connection pooling) handle write-heavy workloads efficiently.

Storage Architecture: OLTP + OLAP

Splat stores every event and transaction in two places, each chosen for what it's good at:

  • SQLite (OLTP) — source of truth for ingestion, find-by-id lookups, status changes, and the recent-firehose views. Fast, embedded, no operational overhead. SQLite retention can be aggressive (e.g. 14–30 days) without losing analytical history.
  • DuckLake (OLAP) — every event/transaction is mirror-written to a DuckDB-managed parquet store on local disk or S3. All time-windowed aggregates — endpoint stats, percentile breakdowns, response-time charts, the "Top Endpoints by Impact" table, even the project dashboard's 24-hour counts — read from DuckLake. Columnar scans over parquet are dramatically faster than row scans for these queries, and retention can be set independently and longer than SQLite (e.g. months of history at low storage cost).

The two stores have separate retention jobs, so you can keep weeks of OLTP detail and months of OLAP history without bloating either one.

Endpoint Impact Ranking

The performance dashboard ranks endpoints by time spent (avg_duration × count) rather than average duration. A 50ms endpoint hit 10,000×/day costs more total time than a 2s endpoint hit 5×/day — sorting by impact tells you where optimisation actually pays back. The same table also surfaces P95 alongside, so a tail-heavy endpoint isn't hidden by a low average.
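The arithmetic can be sketched in a few lines of Ruby; the controller names and numbers below are invented for illustration and mirror the example above:

```ruby
# Hypothetical data: a fast endpoint hit often vs a slow endpoint hit rarely.
endpoints = [
  { name: "FastController#index", avg_ms: 50,   count: 10_000 },
  { name: "SlowController#show",  avg_ms: 2000, count: 5 }
]

# Impact = average duration x call count; sort descending by total time spent.
ranked = endpoints
  .map { |e| e.merge(impact_ms: e[:avg_ms] * e[:count]) }
  .sort_by { |e| -e[:impact_ms] }

ranked.each { |e| puts "#{e[:name]}: #{e[:impact_ms]} ms total" }
# FastController#index: 500000 ms total
# SlowController#show: 10000 ms total
```

Sorting by average duration alone would rank these the other way around, hiding the endpoint that actually dominates total time.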

Span Storage and SQL Normalization

Each transaction's span tree is stored in DuckLake (columnar parquet) and rendered as a waterfall on the transaction detail page. Spans are 10–100× the volume of transactions, but DuckLake's per-column dictionary + RLE compression — combined with one ingest-time pass — keeps storage manageable:

  • SQL normalization at ingest: span descriptions like SELECT * FROM users WHERE id = 42 AND email = '[email protected]' are rewritten to SELECT * FROM users WHERE id = ? AND email = ? before being written to disk. The parameterized form dictionary-encodes to ~2 bytes per row instead of 500+.
  • Privacy bonus: because literals never reach disk, user IDs, email addresses in WHERE clauses, names in INSERTs, and tokens in URL paths are automatically redacted. We can show you the pattern of the offending query, but not the literal values that triggered it. This is a deliberate trade-off — and a documented commitment, not an accident.
  • Cap and retention: spans are capped at 1000 per transaction (excess dropped, transaction flagged) and retained for 30 days by default (configurable separately from transactions, since span volume is far higher).
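A minimal sketch of the normalization idea, not Splat's actual implementation (which would need to handle more SQL forms than two regexes cover):

```ruby
# Hypothetical sketch: strip string and numeric literals from a SQL string so
# the parameterized form dictionary-encodes well and user data never hits disk.
def normalize_sql(sql)
  sql
    .gsub(/'(?:[^']|'')*'/, "?")   # single-quoted string literals
    .gsub(/\b\d+(\.\d+)?\b/, "?")  # integer and decimal literals
end

puts normalize_sql("SELECT * FROM users WHERE id = 42 AND email = '[email protected]'")
# => SELECT * FROM users WHERE id = ? AND email = ?
```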

Maintenance

Data Retention and Cleanup

Splat automatically cleans up old data to manage database size and maintain performance.

Default Retention Periods

  • Events/Issues: 90 days (configurable via SPLAT_MAX_EVENT_LIFE_DAYS)
  • Transactions: 90 days (configurable via SPLAT_MAX_TRANSACTION_EVENT_LIFE_DAYS)
  • Files: 90 days (configurable via SPLAT_MAX_FILE_LIFE_DAYS)

Cleanup Process

  • Schedule: Daily at 2:00 AM UTC via Solid Queue recurring jobs
  • Actions:
    • Deletes events older than retention period
    • Deletes transactions older than retention period
    • Removes empty issues (issues with no associated events)
    • Logs cleanup statistics to Rails logger

Configuration

Override default retention periods with environment variables:

# Keep events for 30 days instead of 90
SPLAT_MAX_EVENT_LIFE_DAYS=30

# Keep transactions for 60 days
SPLAT_MAX_TRANSACTION_EVENT_LIFE_DAYS=60

# Keep files for 180 days
SPLAT_MAX_FILE_LIFE_DAYS=180

Manual Cleanup

To run cleanup manually:

# Run cleanup with default settings
bin/rails runner "CleanupEventsJob.new.perform"

# Run cleanup with custom retention periods
SPLAT_MAX_EVENT_LIFE_DAYS=30 bin/rails runner "CleanupEventsJob.new.perform"

Monitoring

Check cleanup activity in Rails logs:

tail -f log/production.log | grep "Cleanup"

Example log output:

Started cleanup: events=90d, transactions=90d, files=90d
Deleted 1,234 old events
Deleted 567 old transactions
Deleted 89 empty issues
Cleanup completed successfully

Monitoring

Splat provides a /_health endpoint for monitoring service status and queue depth.

Health Endpoint

GET /_health

Response:

{
  "status": "ok",
  "timestamp": "2025-10-23T12:34:56Z",
  "queue_depth": 0,
  "queue_status": "healthy",
  "event_count": 1234,
  "issue_count": 56,
  "transaction_count": 5678,
  "transactions_per_second": 1.23,
  "transactions_per_minute": 73.5
}

Response Fields:

  • status: Overall system health (ok or degraded)
  • queue_status: Queue health (healthy, warning, or critical)
  • queue_depth: Number of pending background jobs
  • timestamp: Current server time (ISO 8601)
  • event_count: Total error events tracked
  • issue_count: Number of open issues
  • transaction_count: Total performance transactions
  • transactions_per_second: Rate over last minute
  • transactions_per_minute: Rate over last hour

Environment Variables for Thresholds:

# Optional - defaults shown
QUEUE_WARNING_THRESHOLD=50   # queue_status becomes "warning"
QUEUE_CRITICAL_THRESHOLD=100 # queue_status becomes "critical", status becomes "degraded"
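If you poll /_health from your own tooling, the payload can be turned into an alert with a few lines of Ruby. The helper below is a hypothetical sketch, not part of Splat:

```ruby
require "json"

# Hypothetical helper: return an alert message for an unhealthy queue, nil otherwise.
# The hash shape matches the /_health response documented above.
def queue_alert(health)
  return nil if health["queue_status"] == "healthy"
  "Splat queue #{health['queue_status']}: #{health['queue_depth']} jobs pending"
end

payload = JSON.parse('{"status":"ok","queue_status":"healthy","queue_depth":0}')
puts queue_alert(payload).inspect
# => nil
```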

Uptime Kuma Setup

Monitor Configuration:

  • Monitor Type: HTTP(s) - JSON Query
  • URL: https://splat.yourdomain.com/_health
  • Expected Status Code: 200
  • Check Interval: 60 seconds (or your preference)

Option 1: Monitor Queue Status (Recommended)

  • JSON Path: $.queue_status
  • Expected Value: healthy
  • Alert When: Value is not equal to expected value
  • Result: Alerts when queue is "warning" or "critical"

Option 2: Monitor Overall Status

  • JSON Path: $.status
  • Expected Value: ok
  • Alert When: Value is not equal to expected value
  • Result: Alerts only when system is "degraded" (critical queue depth)

Notification Settings: Configure Uptime Kuma to send alerts via:

  • Email
  • Slack
  • Discord
  • Webhook
  • Or any of the 90+ notification services supported

Monitoring Guidelines:

  • Normal queue depth: 0-10 jobs (instant processing)
  • Warning level: 50-99 jobs (queue building up)
  • Critical level: 100+ jobs (queue backlog)

When queue_status is "warning":

  • Jobs are processing but slower than ingestion rate
  • Check Solid Queue worker status
  • Consider scaling workers if sustained

When queue_status is "critical":

  • Significant backlog, data delayed
  • Immediate investigation needed
  • Check for worker crashes or resource constraints

Backup

Splat uses SQLite databases. Two recommended backup strategies:

Litestream - Continuous replication to S3-compatible storage

  • Real-time backup with ~10-30 second lag
  • Supports AWS S3, Backblaze B2, Cloudflare R2, MinIO
  • Point-in-time recovery

sqlite3_rsync - Efficient incremental backups

  • Creates byte-for-byte clones of live databases
  • Works while database is in use
  • Smaller incremental transfers than full copies

What to Backup

  • storage/production.sqlite3 - Events, issues, transactions (critical)
  • storage/production_queue.sqlite3 - Background jobs (recommended)
  • storage/production_cache.sqlite3 - Performance counters (optional)

Model Context Protocol (MCP) Integration

Splat exposes an MCP server that allows Claude and other AI assistants to query error tracking and performance data directly. As Splat has no authentication system of its own, the endpoint is protected by an authentication token set via an environment variable.

Setup

1. Generate an authentication token:

# Using OpenSSL
openssl rand -hex 32

# Or using Ruby
ruby -r securerandom -e 'puts SecureRandom.hex(32)'

2. Add to your environment:

# .env
MCP_AUTH_TOKEN=your-generated-token-here

3. Configure Claude Desktop:

Note: Claude Desktop currently only supports stdio transport (not HTTP). To use Splat's MCP server with Claude Desktop, you'll need to create a proxy script.

Create a file at ~/splat-mcp-proxy.sh:

#!/bin/bash
# Proxy for Splat MCP over stdio -> HTTP
# Replace TOKEN with your actual MCP_AUTH_TOKEN

while IFS= read -r line; do
  echo "$line" | curl -s -X POST http://localhost:3030/mcp \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer YOUR_TOKEN_HERE" \
    -d @-
done

Make it executable:

chmod +x ~/splat-mcp-proxy.sh

Edit ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "splat": {
      "command": "/Users/YOUR_USERNAME/splat-mcp-proxy.sh",
      "transport": {
        "type": "stdio"
      }
    }
  }
}

Alternative: Use from Claude Code (supports HTTP):

You can use command line like:

`claude mcp add --transport http splat http://localhost:3030/mcp --header "Authorization: Bearer your-generated-token-here"`

Claude Code (VS Code extension) supports HTTP transport. In your workspace, you can connect directly:

{
  "mcpServers": {
    "splat": {
      "url": "http://localhost:3030/mcp",
      "transport": {
        "type": "http",
        "headers": {
          "Authorization": "Bearer your-generated-token-here"
        }
      }
    }
  }
}

4. Restart Claude Desktop or VS Code

Available MCP Tools

Issue Management:

  • list_recent_issues - List recent issues by status
  • search_issues - Search by keyword, exception type, or status
  • get_issue - Get detailed issue with stack trace
  • get_issue_events - List event occurrences for an issue
  • get_event - Get full event details (request ID, breadcrumbs, context)
  • resolve_issue / ignore_issue / reopen_issue - Lifecycle transitions

Performance Monitoring:

  • get_transaction_stats - Overall percentiles plus top endpoints ranked by total time spent (avg × count)
  • get_endpoint_summary - Per-endpoint percentiles (overall + DB + view) with fastest/slowest sample requests
  • get_endpoint_timeseries - Bucketed count + p50/p95/p99 for one endpoint over a time range — built for spotting regressions ("did p95 jump after the 14:00 deploy?")
  • find_n_plus_one_endpoints - Endpoints ranked by N+1 prevalence (% of transactions affected, avg/max queries per request) so you find the worst offenders quickly
  • search_slow_transactions - Find slow individual requests
  • get_transactions_by_endpoint - List recent transactions for one endpoint
  • compare_endpoint_performance - Before/after percentile comparison around a release or timestamp
  • get_transaction - Get detailed transaction breakdown
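The "% of transactions affected" metric behind find_n_plus_one_endpoints can be illustrated with a few lines of Ruby; the transactions below are made-up sample data, not Splat's internals:

```ruby
# Hypothetical sample: each transaction is tagged with whether an N+1 pattern
# was detected in its query analysis.
transactions = [
  { endpoint: "BooksController#show",  n_plus_one: true  },
  { endpoint: "BooksController#show",  n_plus_one: true  },
  { endpoint: "BooksController#show",  n_plus_one: false },
  { endpoint: "UsersController#index", n_plus_one: false }
]

# Prevalence per endpoint = affected transactions / total, ranked worst-first.
prevalence = transactions
  .group_by { |t| t[:endpoint] }
  .transform_values { |ts| ts.count { |t| t[:n_plus_one] }.fdiv(ts.size) }
  .sort_by { |_, pct| -pct }

prevalence.each { |endpoint, pct| puts "#{endpoint}: #{(pct * 100).round}% affected" }
# BooksController#show: 67% affected
# UsersController#index: 0% affected
```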

Usage Examples

Once configured, you can ask Claude:

  • "List recent open issues in Splat"
  • "Search for NoMethodError issues in production"
  • "Where is the booko app spending its time?" → impact-ranked top endpoints
  • "Which endpoints have N+1 query problems?" → ranked worklist
  • "Show the p95 of UsersController#show over the last 7 days, hourly"
  • "Compare AlertsController#index performance before and after release v1.42.0"

Full List of Environment Variables

  RAILS_ENV: production
  SECRET_KEY_BASE: generate with `openssl rand -hex 64`
  HOST_IP: ip address to bind to
  PORT: 3000
  SPLAT_DOMAIN: https://splat.example.com # Change this to your domain
  FROM_EMAIL: splat@example.com # Change this to your email
  SOLID_QUEUE_THREADS: 3
  SOLID_QUEUE_PROCESSES: 1

MCP (Model Context Protocol)

MCP_AUTH_TOKEN: Generate with `openssl rand -hex 32`

Data Retention

SPLAT_MAX_EVENT_LIFE_DAYS=30
SPLAT_MAX_TRANSACTION_EVENT_LIFE_DAYS=60
SPLAT_MAX_FILE_LIFE_DAYS=180

Mission Control

Optionally set these if you'd like to access /jobs to view the Solid Queue management system:
  MISSION_CONTROL_USERNAME
  MISSION_CONTROL_PASSWORD

OIDC Authentication Setup

Splat supports OpenID Connect (OIDC) authentication with automatic discovery URL configuration. This replaces the basic auth setup with proper user authentication.

Quick Start with Discovery URLs

The preferred method is using OIDC discovery URLs - just set 3 environment variables:

# Required for OIDC authentication (app automatically adds .well-known path)
OIDC_DISCOVERY_URL=https://your-provider.com
OIDC_CLIENT_ID=your-client-id
OIDC_CLIENT_SECRET=your-client-secret
OIDC_PROVIDER_NAME=Your Provider Name  # Optional: Display name for login button

Important: Configure your OIDC provider with the callback URL: https://your-splat-domain.com/auth/callback

Provider-Specific Examples

Google:

OIDC_DISCOVERY_URL=https://accounts.google.com/.well-known/openid-configuration
OIDC_CLIENT_ID=your-google-client-id
OIDC_CLIENT_SECRET=your-google-client-secret
OIDC_PROVIDER_NAME=Google

Okta:

OIDC_DISCOVERY_URL=https://your-domain.okta.com/.well-known/openid-configuration
OIDC_CLIENT_ID=your-okta-client-id
OIDC_CLIENT_SECRET=your-okta-client-secret
OIDC_PROVIDER_NAME=Okta

Auth0:

OIDC_DISCOVERY_URL=https://your-domain.auth0.com/.well-known/openid-configuration
OIDC_CLIENT_ID=your-auth0-client-id
OIDC_CLIENT_SECRET=your-auth0-client-secret
OIDC_PROVIDER_NAME=Auth0

Microsoft Azure AD:

OIDC_DISCOVERY_URL=https://login.microsoftonline.com/your-tenant-id/v2.0/.well-known/openid-configuration
OIDC_CLIENT_ID=your-azure-client-id
OIDC_CLIENT_SECRET=your-azure-client-secret
OIDC_PROVIDER_NAME=Microsoft

Manual Endpoint Configuration

If your provider doesn't support discovery URLs, configure endpoints individually:

# Required OIDC settings
OIDC_CLIENT_ID=your-client-id
OIDC_CLIENT_SECRET=your-client-secret
OIDC_AUTH_ENDPOINT=https://your-provider.com/oauth/authorize
OIDC_TOKEN_ENDPOINT=https://your-provider.com/oauth/token
OIDC_USERINFO_ENDPOINT=https://your-provider.com/oauth/userinfo
OIDC_JWKS_ENDPOINT=https://your-provider.com/.well-known/jwks.json
OIDC_PROVIDER_NAME=Your Provider

How It Works

  1. Discovery: The app automatically fetches OIDC configuration from your provider's discovery URL
  2. Authentication: Users are redirected to your OIDC provider for login
  3. Token Storage: JWT tokens are encrypted and stored in secure HTTP-only cookies
  4. Auto-Refresh: Tokens are automatically refreshed when needed (5 minutes before expiry)
  5. Session Migration: Existing sessions are automatically migrated to encrypted cookies
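The refresh rule in step 4 boils down to a simple time comparison. A hypothetical sketch (the method name and margin handling are illustrative, not Splat's actual code):

```ruby
require "time"

REFRESH_MARGIN = 5 * 60 # refresh 5 minutes before expiry, per step 4

# Hypothetical helper: should the token be refreshed now?
def needs_refresh?(expires_at, now: Time.now)
  now >= expires_at - REFRESH_MARGIN
end

expires_at = Time.now + 600                       # token valid for 10 more minutes
puts needs_refresh?(expires_at)                   # more than 5 minutes left => false
puts needs_refresh?(expires_at, now: expires_at - 120)  # inside the margin => true
```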

Security Features

  • Encrypted Cookies: JWT tokens are encrypted using Rails message verifier
  • HTTP-Only Cookies: Tokens not accessible via JavaScript
  • SameSite=Strict: Protection against CSRF attacks
  • JWT Verification: Optional token signature validation
  • Automatic Cleanup: Tokens cleared on logout or expiry

Email Sending Setup

  https://guides.rubyonrails.org/action_mailer_basics.html#action-mailer-configuration
  https://guides.rubyonrails.org/configuring.html#configuring-action-mailer

  SMTP_ADDRESS - default 'localhost'
  SMTP_PORT - default 587
  SMTP_DOMAIN - default 'localhost'. Some providers require it to match a verified domain.
  SMTP_USER_NAME - default nil
  SMTP_PASSWORD - default nil
  SMTP_AUTHENTICATION - default 'plain'
  SMTP_STARTTLS_AUTO - default 'true'
  SMTP_OPENSSL_VERIFY_MODE - default 'none'

Development

Services

  • Solid Queue: Background job processing (bin/jobs)
  • Solid Cache: In-memory caching
  • Solid Cable: Real-time updates (optional)

Email Previews

View email templates at http://localhost:3000/rails/mailers

About

An experimental, lightweight, self-hosted exception and performance monitoring tool built with Rails and SQLite. Inspired by Sentry/Glitchtip but optimised for simplicity. Features live updates, MCP integration for AI agents to query production data, and efficient trace storage. Built in a weekend.
