Ultra-fast token & cost tracker for LLM token usage (e.g. Claude Code)
Updated Apr 20, 2026 · Rust
The one dashboard you’ve been looking for — track spend and usage across Claude, Cursor, OpenRouter, Copilot, Gemini, Codex, and more.
Track, visualize, and optimize LLM API spending. Monitor OpenAI & Anthropic costs per feature, detect waste, suggest savings. Zero-config Python profiler.
Free AI API cost calculator SDK for TypeScript and Python with verified, continuously updated model pricing.
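The arithmetic behind such a cost calculator is simple: multiply input and output token counts by per-million-token rates. A minimal sketch follows; the model name and prices are hypothetical placeholders, not verified rates for any real model or the API of any SDK listed here.

```python
# Hypothetical pricing table: USD per 1M tokens (illustrative values only).
PRICING = {
    "example-model": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return estimated USD cost for one request against the pricing table."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# 2M input tokens at $3/M plus 100K output tokens at $15/M = $7.50
print(estimate_cost("example-model", 2_000_000, 100_000))  # 7.5
```

Real calculators differ mainly in keeping the pricing table verified and current, which is the hard part the SDKs above advertise.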
AI Image Generation Cost Analysis
LLM cost monitoring and optimization toolkit
Zero-intrusion guard for LLM calls in dev: dedupe, cache, and protect AI requests across Node, browser, and Vite.
Open-source LLM FinOps proxy — track OpenAI, Anthropic (Claude), and Google Gemini costs by feature, team, and customer. Zero code changes. pip install burnlens.
TTDash is a local-first dashboard and CLI for toktrack usage data
Local-first desktop app to track token usage and AI costs from Claude Code, Codex CLI, and OpenCode logs.
Self-hosted AI token usage dashboard for OpenClaw · Claude · Codex · Gemini
TokenFence blog — articles on AI agent cost control and budget management