Smart LLM router that cuts AI costs by 40-70%. Routes simple prompts to cheap/local models, complex ones to premium. Works with Claude Code, OpenClaw, Cursor. OpenAI-compatible proxy, runs locally.
Intelligent model orchestration for Claude Code - routes queries to the optimal Claude model (Haiku/Sonnet/Opus) based on complexity, plus additional routing features. If this project works well for you and you'd like to support it, just help spread the word. Thanks!
Free, self-hosted AI model router. OpenRouter / ClawRouter alternative using your own API keys. 14-dimension classifier routes to the right model (Anthropic/OpenAI/Kimi) automatically. No middleman, no markup. Built for OpenClaw.
An intelligent, low-latency local LLM router that reduces AI costs by 30-70%. Uses a self-hosted classifier to automatically route prompts to the most cost-effective model without external API overhead.
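Several of these routers work the same way at their core: score a prompt's complexity locally, then pick a model tier. A minimal sketch of that idea, with purely hypothetical heuristics and tier names (none taken from the projects above):

```python
# Illustrative complexity-based routing sketch. The scoring heuristics and
# model-tier names below are made up for demonstration only.

def complexity_score(prompt: str) -> float:
    """Crude heuristic: longer prompts, code fences, and multi-step
    wording are treated as more complex. Returns a value in [0, 1]."""
    score = min(len(prompt) / 2000, 1.0)
    if "```" in prompt:
        score += 0.3
    if any(w in prompt.lower() for w in ("step by step", "prove", "refactor")):
        score += 0.3
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Map a complexity score to a model tier (tier names are placeholders)."""
    s = complexity_score(prompt)
    if s < 0.3:
        return "cheap-local-model"
    if s < 0.7:
        return "mid-tier-model"
    return "premium-model"

print(route("What is 2 + 2?"))   # simple prompt goes to the cheap tier
print(route("Refactor this module step by step:\n" + "x" * 3000))
```

Real routers replace the heuristic with a trained classifier (e.g. the 14-dimension classifier mentioned above), but the cheap-score-then-dispatch shape is the same.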
An Agentic AI system for Deepfake Detection & Media Authenticity Verification. Features autonomous model routing, an ensemble of 6 specialized neural networks, and advanced bias correction mechanisms.
AI assistant that learns your workflows — auto-generates skills, routes to the cheapest LLM, and replays every execution. Telegram · Slack · Discord · Matrix. One Rust binary, zero setup.
An intelligent LLM inference gateway that dynamically routes user queries to optimal model tiers (Llama-3.1 8B/70B) based on real-time complexity, reasoning depth, and ambiguity analysis.
Cost-intelligent AI agent framework for TypeScript. Model routing, budget enforcement, multi-agent pipelines with crash recovery. Works with Claude, GPT, Gemini, and 40+ providers.
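Budget enforcement, as mentioned above, usually means tracking cumulative spend per run and refusing calls that would exceed a cap. A hedged sketch of that mechanism (hypothetical API, not the framework's real one, which is TypeScript):

```python
# Hypothetical per-run budget tracker: rejects a charge before it is
# incurred if it would push total spend over the configured limit.

class BudgetExceeded(Exception):
    pass

class Budget:
    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> None:
        """Record a model-call cost, or raise before exceeding the cap."""
        if self.spent + cost_usd > self.limit:
            raise BudgetExceeded(
                f"would spend {self.spent + cost_usd:.4f} USD > limit {self.limit}"
            )
        self.spent += cost_usd

budget = Budget(limit_usd=1.00)
budget.charge(0.40)   # allowed
budget.charge(0.40)   # allowed, total 0.80
# budget.charge(0.40) would now raise BudgetExceeded
```

Checking before incurring the cost (rather than after) is what makes this an enforcement mechanism rather than mere accounting.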
Intelligent model routing engine that analyzes customer specifications, scores complexity, and routes each request to the most cost-effective LLM. Up to 97% cost savings.
One place to define AI tasks and map each task to a model. Stop scattering ad-hoc LLM calls across the codebase; keep visibility, control, and cost/performance tuning organized by task type.
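The task-to-model mapping described above can be pictured as a small central registry that every call site consults instead of hard-coding a model. A minimal sketch with invented task and model names:

```python
# Hypothetical central task -> model registry. Task names and model
# identifiers are placeholders, not from any project listed above.

TASK_MODELS: dict[str, str] = {
    "summarize":   "small-cheap-model",
    "code-review": "large-reasoning-model",
    "classify":    "small-cheap-model",
}

def model_for(task: str) -> str:
    """Resolve a registered task to its assigned model; fail loudly on
    unregistered tasks so ad-hoc calls can't slip through."""
    try:
        return TASK_MODELS[task]
    except KeyError:
        raise ValueError(f"unregistered task: {task!r}") from None

print(model_for("summarize"))  # every call site goes through the registry
```

Because the mapping lives in one place, swapping a task to a cheaper or stronger model is a one-line change with an audit trail, which is the visibility/control benefit the description claims.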