AI-powered product wisdom platform built as a monorepo.
- `backend/` - FastAPI service scaffold
- `frontend/` - Vite + Tailwind app scaffold
- `data-pipeline/` - ingestion, schema migrations, and verification scripts
- `data/lennys-newsletterpodcastdata/` - canonical corpus directory
- Copy env: `cp .env.example .env`
- Install Python deps: `make setup`
- Normalize dataset location: `make normalize-data`
- Apply database schema: `make migrate`
- Run ingestion: `make ingest`
- Verify ingest: `make verify`
- API server: `make run-api`
- Frontend dev server: `cd frontend && npm run dev`
- Dry-run ingest (no writes): `make ingest-dry-run`
- Limited ingest dry-run (first N docs): `make ingest-dry-run-limit LIMIT=10`
- Limited ingest (first N docs): `make ingest-limit LIMIT=10`
- Tests: `make test`
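The dry-run and `LIMIT` behavior above can be sketched as an ingestion driver. The function signature, `Doc` type, and file names here are hypothetical illustrations, not the actual `data-pipeline` API:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    path: str
    text: str

def ingest(docs, dry_run=False, limit=None):
    """Process up to `limit` docs; with dry_run, report what would be
    written without touching the database (hypothetical sketch)."""
    written = []
    for doc in docs[:limit]:  # limit=None slices to the full list
        if dry_run:
            print(f"[dry-run] would ingest {doc.path}")
        else:
            written.append(doc.path)  # a real run would upsert into the DB here
    return written

docs = [Doc(f"data/ep{i}.md", "...") for i in range(25)]
ingest(docs, dry_run=True, limit=10)  # reports the first 10 docs, writes nothing
```

Running a limited dry-run first is a cheap way to confirm paths and parsing before a full ingest.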
- Production uses a single container (`ghcr.io/<owner>/lennyverse`) that serves:
  - FastAPI API endpoints under `/api/*`
  - Built frontend assets from `frontend/dist`
- This same-origin setup removes cross-origin calls in production and avoids CORS deployment issues.
- Malformed frontmatter: the parser logs a warning and skips the bad file; fix the source file and re-run ingestion.
- Supabase paused / cold start: wake the project in the Supabase dashboard, then retry `make migrate` or `make ingest`.
- Embedding endpoint timeout: ensure Ollama/NIM is reachable at `OLLAMA_EMBED_BASE_URL`, then retry ingest.
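The warn-and-skip behavior for malformed frontmatter can be sketched as follows. The `---` fence convention, `key: value` format, and helper names are assumptions about the parser, not its real implementation:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("ingest")

def parse_frontmatter(raw: str) -> dict:
    """Parse a leading `---` ... `---` block of `key: value` lines (assumed format)."""
    if not raw.startswith("---\n"):
        raise ValueError("missing frontmatter fence")
    header, sep, _body = raw[4:].partition("\n---\n")
    if not sep:
        raise ValueError("unterminated frontmatter fence")
    meta = {}
    for line in header.splitlines():
        key, colon, value = line.partition(":")
        if not colon:
            raise ValueError(f"malformed line: {line!r}")
        meta[key.strip()] = value.strip()
    return meta

def load_docs(files: dict) -> list:
    """Skip files whose frontmatter fails to parse, logging a warning for each."""
    docs = []
    for path, raw in files.items():
        try:
            docs.append(parse_frontmatter(raw))
        except ValueError as exc:
            # Re-run ingestion after fixing the offending source file.
            log.warning("skipping %s: %s", path, exc)
    return docs
```

A skipped file never aborts the run, so one bad episode file cannot block the rest of the corpus.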