Releases: asakin/llm-context-base
v1.1.0 - The Schema Gets Real
Everything in llm-context-base now runs on YAML frontmatter. The metadata standard designed to make every file queryable without reading it is now a proper schema, not a loose convention.
This release also ships the features that emerged from real daily use: an onboarding system that teaches you the tool as you use it, a lint system that knows when "archived" pages shouldn't be flagged as stale, and a tools manifest so you can declare what your wiki talks to without installing anything upfront.
Headline: YAML frontmatter migration
All metadata blocks are now YAML frontmatter (`---` fences at the top of every file). This is the one change that affects existing users — if you started on v1.0.0, your files use the old bullet-list format. Both work, but new files will use YAML, and the lint system expects it going forward.
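For illustration, a frontmatter block might look like this. Only `confidence:`, `status: superseded`, and `failure_reason` are field names confirmed by these notes; the rest are hypothetical placeholders, not the actual schema:

```yaml
---
# Hypothetical sketch — only confidence, status, and failure_reason
# are field names confirmed in this release; the others are illustrative.
title: Old sync workflow
status: superseded
failure_reason: Replaced by the GitHub-sync approach in v1.1.0
confidence: high
summary: One-line summary the AI can scan without reading the full file.
---
```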
New features
- Onboarding tip system — contextual 💡 hints during training and cooldown phases. The AI picks tips based on what you haven't tried yet. Shown at session start and inline when relevant.
- Tools manifest — declare the tools your wiki integrates with (Obsidian, MCP servers, export targets) in `_config/tools.md`. The AI offers to install them on first session.
- Project Index — `1-Projects/README.md` now auto-maintains a table of all projects with status, summary, and last-updated date.
- Knowledge ops log — append-only log at `2-Knowledge/log.md` tracks every ingest, query, lint, and edit. Provenance for your wiki.
- Confidence field + superseded status — the metadata standard now supports `confidence:` (how sure you are) and `status: superseded` with a required `failure_reason`.
- Lint staleness exemption — pages with terminal statuses (archived, superseded, cancelled) are no longer flagged as stale. Finally.
- Conversation Tone section — `_config/config.md` now has a dedicated section for controlling how the AI talks to you vs. how it writes files.
- Folder-per-project model — projects can now be a folder with an `_overview.md` and supporting files, not just a single page.
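As a sketch of what a tools manifest could contain — the actual format of `_config/tools.md` is not specified in these notes, so every key below is an illustrative assumption:

```yaml
# Hypothetical sketch of _config/tools.md contents; the real file
# format is not documented in these release notes.
tools:
  - name: obsidian
    purpose: local vault viewer and graph
    installed: false
  - name: github-mcp
    purpose: MCP server for repo access from chat LLMs
    installed: false
```

The point of declaring tools rather than installing them upfront is that the AI can offer installation in-session, only when a tool first becomes relevant.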
Docs
- LLM Wiki pattern comparison — how llm-context-base compares to Karpathy's original gist and other implementations
- Chat LLM support — how to use Claude Chat or ChatGPT with your wiki via GitHub sync
- Obsidian setup rewrite — Minimal theme, appearance config, graph settings, Terminal plugin
- Journal sync and privacy — options for keeping `3-Journal/` private while syncing everything else
- Claude Code setup guide — starter `settings.json` and recommended permissions
Fixes
- `docs/` and `_meta/` files no longer require the metadata standard (they're framework files, not wiki content)
- Framework mode detection uses a sentinel file instead of fragile git-remote checks
- Training footer formatting cleaned up, questions numbered, meta-questions about the system banned
- Obsidian defaults to reading mode with hidden inline titles
Full Changelog: v1.0.0...v1.1.0
v1.0.0 - Foundation
The first stable release. No new features, just the existing system made solid and documented for public use.
What this is
A git template for building your own LLM-powered personal knowledge base, built on Andrej Karpathy's LLM Wiki pattern. The difference from a bare wiki: it learns how you work.
What's included
- Two-section README - personal section maintained by your AI, framework docs tracked cleanly upstream
- Metadata standard - every file gets a queryable header so your AI scans summaries, not full docs
- Training period - 30-day adaptive phase where the AI asks questions, suggests structure, and logs what it learns. Goes quiet after.
- Inbox-first capture - everything lands in `_inbox/` first. You never hesitate to capture because you don't have to decide where it goes.
- Instruction modules - lint, knowledge query, write, and definition-of-done behaviors, loaded just-in-time
- Templates - decision records, knowledge articles, project briefs
Supported tools
Claude Code, Cursor, Copilot, Windsurf, and anything that reads a CLAUDE.md or system prompt from a folder.
Get started
Click Use this template, clone it, fill in _config/config.md, and start talking to your AI.
Apache 2.0 - Built by @asakin