A job search system that runs on your computer.
Finds jobs. Scores them. Tracks your pipeline. Your data stays yours.
Pick whichever feels right. They all give you the same app.
```bash
curl -fsSL https://raw.githubusercontent.com/pleasedodisturb/kestrel/main/install.sh | bash
```

Detects your OS, checks for Python 3.11+, installs Kestrel, and opens it in your browser.

Or if you have Node.js:

```bash
npx kestrel-app
```

Or with Homebrew (macOS):

```bash
brew install pleasedodisturb/kestrel/kestrel
kestrel start
```

Or with pip:

```bash
pip install kestrel-app
kestrel start
```

Opens your browser automatically. Data stored in `~/.kestrel/`.
Requires Python 3.11+. Don't have Python? Install it from python.org/downloads (Mac/Windows installer, takes 2 minutes). Or use Option 2 or 3 below instead.
```bash
git clone https://github.com/pleasedodisturb/kestrel.git && cd kestrel
bash setup.sh
```

Requires OrbStack (recommended for Mac) or Docker Desktop (Mac/Windows). Both are free. Don't know what Docker is? The step-by-step guide explains everything.
Free with a GitHub account. Your own instance in 2 minutes. Nothing installed on your computer.
Lost? Step-by-step guide or FAQ.
Pipeline — drag applications across stages
Discovery — AI-scored job matches
Settings — connect your integrations
- Discovers jobs from multiple boards automatically (Indeed, LinkedIn, Glassdoor, Arbeitsagentur)
- Scores them against your profile with AI — stop guessing which jobs are worth applying to
- Tracks your pipeline on a Kanban board — drag applications between stages
- Prepares you for interviews — company research, mock questions, STAR story library
- Runs daily scans via GitHub Actions — wake up to a scored digest of new matches
- Works offline — Demo Mode included, zero cost to start. Add real AI when ready.
Everything runs on your machine. No account needed. No data leaves your computer (unless you connect an AI provider).
Getting started:
| Guide | What you'll learn |
|---|---|
| Quickstart | First-time setup, step by step — zero assumptions |
| FAQ | "Can I...?" "What if...?" "Why does...?" — all answered |
| Help | Something broke? Start here. We'll fix it together. |
Understanding AI in Kestrel:
| Guide | What you'll learn |
|---|---|
| How Kestrel Uses AI | The electricity analogy — what AI providers are, what they cost, and which to pick |
| AI Provider Setup | Technical details — API keys, privacy policies, provider comparison tables |
| LLM Landscape Research | Deep dive — 2026 pricing, privacy audits, GDPR, EU sovereignty (for the curious) |
How it works under the hood:
| Guide | What you'll learn |
|---|---|
| How Scoring Works | What "fit score" actually means, and how Kestrel decides which jobs match you |
| How Testing Works | 2,800+ automated checks — the kitchen analogy for quality assurance |
Going deeper:
| Guide | What you'll learn |
|---|---|
| Comparison | How Kestrel stacks up against Huntr, Teal, Simplify, and others |
| Features & API Reference | Full feature list, architecture, CLI, and API endpoints |
| Deployment | Host Kestrel on Railway, Fly.io, or your own VPS |
| Contributing | Development setup and pull request guidelines |
Kestrel works out of the box in Demo Mode — free, offline, no account needed. When you're ready for real AI-powered scoring, you have options. Think of AI providers like electricity companies: the light switch works the same no matter who supplies the power.
| Option | Cost | Privacy | Speed | Best for |
|---|---|---|---|---|
| Demo Mode | Free | Perfect | Instant | Exploring before committing |
| OpenRouter | ~$3-10/mo | Good | Varies | Most users — one key, 300+ models |
| Anthropic (Claude) | ~$3-10/mo | Excellent | ~200ms | Best quality + prompt caching savings |
| Together AI | ~$1-5/mo | Good (ZDR available) | ~213ms | Budget-friendly bulk scoring |
| Ollama | Free | Perfect | Depends on hardware | Nothing leaves your machine, ever |
Quickest path: Go to Settings → click "Connect to OpenRouter" → log in → done. No API keys to copy.
AI APIs charge per token (roughly per word). Scoring 50 jobs a day could get expensive — unless you're smart about it. Kestrel stacks several tricks that compound:
| What Kestrel does | How it helps | Savings |
|---|---|---|
| Prompt caching | Your profile is sent once, then "remembered" by the API. Scoring 50 jobs doesn't resend your CV 50 times. | 90% off input tokens |
| Response caching | Asked the same question twice? Kestrel serves it from local cache. Zero API calls, encrypted at rest. | 100% (free) |
| Token-efficient tool use | When Kestrel calls AI tools, it uses a compact format that cuts output size. | 70% off output tokens |
| Smart model selection | Not every task needs the biggest brain. Simple yes/no classification uses a smaller, cheaper model. Complex analysis uses the full thing. | 60-95% on simple tasks |
| Batch scoring | Scoring a big backlog overnight? Batch APIs give a flat 50% discount for non-urgent work. | 50% off everything |
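The response cache row above is the simplest of these to picture. A minimal sketch, with hypothetical function names (Kestrel's real cache lives on disk and is encrypted at rest, which this toy version skips):

```python
import hashlib
import json

# In-memory stand-in for the local response cache (illustrative only --
# the real cache persists to disk and is encrypted at rest).
_cache: dict[str, str] = {}

def cache_key(prompt: str, model: str) -> str:
    """Stable key: the same prompt + model pair always hashes to the same entry."""
    payload = json.dumps({"prompt": prompt, "model": model}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def score_with_cache(prompt: str, model: str, call_api) -> tuple[str, bool]:
    """Return (response, was_cached). A repeated request never hits the API twice."""
    key = cache_key(prompt, model)
    if key in _cache:
        return _cache[key], True        # served locally: zero API calls, zero cost
    response = call_api(prompt, model)  # the only paid path
    _cache[key] = response
    return response, False
```

Ask the same question twice and the second answer comes back from the local cache — that's where the "100% (free)" row comes from.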
The math: Naive approach = $15-30/month. With all optimizations = $1-5/month for the same results. Deep dive on token economics →
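As a back-of-envelope check on that math (illustrative per-token rates and volumes — placeholders, not any provider's real pricing):

```python
# Illustrative rates in USD per million tokens -- placeholders, not real provider pricing.
INPUT_PRICE = 3.0
OUTPUT_PRICE = 15.0

# Hypothetical volume: 50 jobs/day for 30 days, ~4,000 input / 500 output tokens per job.
jobs = 50 * 30
naive = jobs * (4000 * INPUT_PRICE + 500 * OUTPUT_PRICE) / 1_000_000

# Stack the discounts from the table: 90% off input (prompt caching),
# 70% off output (token-efficient tool use), then 50% off everything (batch API).
optimized = jobs * (4000 * INPUT_PRICE * 0.10 + 500 * OUTPUT_PRICE * 0.30) / 1_000_000 * 0.5

print(f"naive: ${naive:.2f}/mo, optimized: ${optimized:.2f}/mo")
```

Under these made-up rates that works out to roughly $29/month naive versus about $2.60/month optimized — inside the ranges quoted above.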
Don't want to think about it? Use OpenRouter. It's the universal adapter — one account gives you Claude, GPT, Gemini, and open-source models. You can always switch later.
Care about privacy? Anthropic has 7-day data retention (shortest in industry). Together AI has a one-click ZDR toggle (SOC 2 Type 2 certified). Ollama keeps everything on your machine.
On a tight budget? Together AI runs open-source models (Llama 3.3, Mixtral) on their own GPUs — no middleman markup. If you're in Europe, their Frankfurt data center means lower latency too. Great for bulk scoring where you don't need Claude-level intelligence.
Want the best of everything? Kestrel can use multiple providers at once — route simple scoring to Together (cheap), complex analysis to Anthropic (quality), and never worry about which is which.
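That multi-provider routing can be pictured as a small lookup table. A sketch with placeholder provider and model names (not Kestrel's actual configuration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Route:
    provider: str
    model: str

# Hypothetical routing table: a cheap open-source tier for bulk work,
# a premium tier for quality-sensitive analysis.
ROUTES = {
    "score":     Route("together", "llama-3.3-70b"),    # bulk fit-scoring
    "classify":  Route("together", "llama-3.3-70b"),    # simple yes/no checks
    "analyze":   Route("anthropic", "claude-sonnet"),   # deep company research
    "interview": Route("anthropic", "claude-sonnet"),   # mock questions, STAR stories
}

def pick_route(task: str) -> Route:
    """Route a task to a provider tier; unknown tasks default to the cheap tier."""
    return ROUTES.get(task, ROUTES["score"])
```

Once routing is a pure function of the task type, "never worry about which is which" is literal: callers just name the task.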
Want to understand more? Read How Kestrel Uses AI — it explains everything in plain English, no jargon. For the full technical comparison with pricing tables and privacy audits, see the AI Provider Setup guide or the LLM landscape research.
Human-first, data-driven. Every infrastructure decision — testing, CI/CD, scoring — is backed by deep research. We investigate thoroughly, then choose the sanest path: not the most sophisticated, but the most sustainable.
Our proof is in the research artifacts. Before building anything, we run parallel research agents, synthesize findings, and publish the decision rationale so anyone can understand why things work the way they do.
| Topic | For users | For developers | Raw research |
|---|---|---|---|
| Scoring | How Scoring Works | Scoring Strategy | Raw Findings |
| Testing | How Testing Works | Testing Strategy | Raw Findings |
| CI/CD | How CI/CD Works | CI/CD Strategy | Raw Findings |
| LLM Token Costs | Quick Wins | Tools & Strategies | 52 Papers + Sources |
AGPL-3.0 — free and open source. If you modify Kestrel and offer it as a service, you must share your changes under the same license.
