Smart LLM Routing for OpenClaw. Cut Costs by up to 70% 🦞🦚
Updated Apr 4, 2026 - TypeScript
Transform your Claude Code terminal with an atomic-precision statusline. Features flexible layouts, real-time cost tracking, MCP monitoring, prayer times, and beautiful themes.
A beautiful, zero-dependency command center for OpenClaw AI agents
htop for your Claude Code sessions — real-time cost, cache efficiency, model comparison, and smart alerts
Comprehensive token usage analysis and cost tracking for opencode sessions
Route inference across LLM providers. Track cost per request.
Ultra-fast token & cost tracker for LLM usage (e.g., Claude Code)
Deep analytics and session insights for your Claude Code usage
htop for your AI costs — real-time terminal monitoring of LLM token usage and spending across providers and coding agents
Describe what you want. Go home. Sandcastle ships it. 6 AI providers, EU data residency, smart failover, cost intelligence, 20 step types, 236 templates. European-built, open source. pip install sandcastle-ai
10 Claude sessions running. What are they doing? Live dashboard — monitoring, cost tracking, search, sub-agent visibility.
Supercharge your Claude Code CLI experience with a powerful statusline that displays key session metrics (Git state, context usage, model info and cost) at a glance — minimal setup, maximum insight.
Self-hosted LLM gateway that routes requests across AI providers (OpenAI, Anthropic, Gemini, Mistral, Ollama) using intelligent multi-policy scoring — including an LLM-native routing policy. Drop-in compatible: just swap the base URL. No database required, built-in cost tracking, budget enforcement and multi-tenant isolation.
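The "multi-policy scoring" idea that the gateway above describes can be sketched in a few lines. Everything here is illustrative (the provider table, the weights, and the function names are assumptions, not the gateway's actual code): each candidate provider is scored across several policies, and the request is routed to the highest-scoring one.

```python
# Minimal sketch of multi-policy routing: score each provider on cost,
# latency, and health, then pick the best. Values and weights are made up.
providers = {
    "openai":    {"cost_per_1k": 0.005, "avg_latency_s": 1.2, "healthy": True},
    "anthropic": {"cost_per_1k": 0.003, "avg_latency_s": 1.5, "healthy": True},
    "ollama":    {"cost_per_1k": 0.0,   "avg_latency_s": 4.0, "healthy": False},
}

WEIGHTS = {"cost": 0.5, "latency": 0.3, "health": 0.2}  # illustrative weights

def score(p):
    # Lower cost and lower latency are better, so invert them into
    # scores that shrink as the raw value grows.
    cost_score = 1.0 / (1.0 + p["cost_per_1k"] * 100)
    latency_score = 1.0 / (1.0 + p["avg_latency_s"])
    health_score = 1.0 if p["healthy"] else 0.0
    return (WEIGHTS["cost"] * cost_score
            + WEIGHTS["latency"] * latency_score
            + WEIGHTS["health"] * health_score)

def route(providers):
    """Return the name of the highest-scoring provider."""
    return max(providers, key=lambda name: score(providers[name]))
```

A real gateway would also fold in budget caps and per-tenant state, but the core decision is this kind of weighted comparison.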
Real-time cost tracking and budget management for Claude Code. Zero setup. No Docker. No Grafana. Just install and go.
A lightweight, per-project cost & quota tracking statusline for Claude Code with zero dependencies.
Enhanced statusline for Claude Code — see your 7d/30d spend at a glance
Python decorators to accurately monitor LLM API cost & latency for OpenAI, Anthropic, and other leading AI models
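A decorator-based cost and latency tracker like the one described above might look roughly like this. The names (`track_cost`, `call_log`) and the hard-coded prices are hypothetical, not the library's actual API; a real tracker would read token counts from the provider's response and load current prices from a price sheet:

```python
import functools
import time

# Assumed per-1K-token prices; purely illustrative.
PRICE_PER_1K_TOKENS = {"gpt-4o": 0.005, "claude-sonnet": 0.003}

call_log = []  # each entry: {"model", "latency_s", "cost_usd"}

def track_cost(model, tokens_used):
    """Decorator factory: log wall-clock latency and estimated cost per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            latency = time.perf_counter() - start
            cost = PRICE_PER_1K_TOKENS[model] * tokens_used / 1000
            call_log.append({"model": model, "latency_s": latency, "cost_usd": cost})
            return result
        return wrapper
    return decorator

@track_cost(model="gpt-4o", tokens_used=800)
def summarize(text):
    # Stand-in for a real LLM API call.
    return text[:20]
```

Calling `summarize(...)` then appends one entry to `call_log` with the measured latency and an estimated cost of 0.005 × 800 / 1000 = $0.004.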
Claude Code Telemetry is a lightweight bridge that captures telemetry data from Claude Code and forwards it to Langfuse for visualization, with a secure, turnkey local installation that takes under a minute.
Know which LLM model to use, what it costs, and when accuracy drops. Companion gem for ruby_llm.
Real-time token tracking, cost forecasting, and rate limit prediction for the Anthropic Claude API. Self-hosted, open source, free forever.