
# What This Machine Is

This is not a documentation system.

This is a personal knowledge operating system: a machine that externalizes your thinking, tracks your decisions, and builds institutional memory over time, with an AI as a first-class participant.

## The Core Insight

Great knowledge work isn't just about capturing information. It's about:

- Learning from patterns across decisions and experiences
- Scaling yourself through documented thinking
- Building memory that outlives any single conversation
- Thinking clearly by forcing ideas into written form

Most people keep all of this in their heads or scattered across chat threads, docs, and emails. This machine integrates it.


## What It Does

### 1. Externalizes Your Thinking

- Knowledge articles capture how things work
- Decision records capture the why, not just the what
- Journal entries capture reflections and observations
- The act of writing forces clarity

### 2. Enables Execution

- Templates structure your work so you never start from a blank page
- Initiatives track multi-artifact projects in one place
- The inbox captures everything - nothing gets lost
- Your AI knows where everything is and can retrieve it

### 3. Creates Feedback Loops

- Past decisions inform future decisions (the precedent chain)
- Patterns emerge across journal entries and knowledge articles
- Stale content is surfaced automatically (lint)
- The system improves as you use it (the training period)

### 4. Builds Institutional Memory

- Knowledge doesn't disappear when you switch tools or forget
- Decisions are documented with full context for future reference
- Processes are captured once and queryable forever
- The AI handles the maintenance; you focus on the thinking

## What It Is NOT

- A publishing platform - you can draft content here, but the system is built for thinking, not performing
- A generic wiki - it has structure and purpose, not just pages
- A filing cabinet - it creates connections, not just storage
- A productivity app - it complements task management; it doesn't replace it

## The Substrate Principle

The repo stays dumb. The intelligence lives outside it.

llm-context-base is a substrate: a pile of markdown files that any intelligent layer can read, write, and act on. Claude Code, Cursor, Cowork, an MCP server, a GitHub Action, a Cloudflare Worker - these are all intelligence layers. The repo is what they operate on, not what they run inside.

This is a deliberate boundary. It means:

- No build step. Clone it, open it, it works. No npm install, no configuration beyond filling in your profile.
- No lock-in. Switch AI tools and the wiki still works. The markdown doesn't care what reads it.
- No runtime. The repo doesn't run; it exists. Sophisticated behaviors - monitoring the web, creating PRs, integrating with external services - belong in the intelligence layer above.
- No enterprise friction. Zero runtime means zero network activity from the repo itself: no telemetry, no phone-home, no CDN, nothing for a corporate proxy or firewall to block or audit. The only network question is whether your company allows your AI tool; the framework adds no additional surface.
- Drop and play, not install and configure. The right mental model is a canvas, not an app.
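The "no runtime" claim is easy to check: any intelligence layer can operate on the substrate with nothing but plain file I/O. A minimal sketch in Python, as one hypothetical illustration (the function name and any directory layout it implies are assumptions, not part of the framework):

```python
from pathlib import Path


def find_mentions(repo_root: str, query: str) -> list[str]:
    """Scan a clone of the substrate for markdown files mentioning `query`.

    No build step, no API, no runtime inside the repo: the intelligence
    layer (this script, an agent, a CI job) does all the work, and the
    repo is just files on disk.
    """
    hits = []
    for md in Path(repo_root).rglob("*.md"):
        if query.lower() in md.read_text(encoding="utf-8").lower():
            hits.append(str(md.relative_to(repo_root)))
    return sorted(hits)
```

Anything that can read files - an agent, a grep, a scheduled job - is a valid intelligence layer; none of them need the repo to cooperate beyond existing.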

The corollary: things like hosted search, MCP servers, and automation pipelines are valid and useful — they just belong above the substrate, not in it. Build them as separate projects that wrap the repo. The repo stays simple so those layers can be anything.

When in doubt: if it requires code to run, it's not substrate. It belongs in the intelligence layer.

### Two types of extension

This boundary clarifies what kind of extension you're building:

| Type | What it is | How it installs | Lives where |
| --- | --- | --- | --- |
| Markdown extension | Instructions, templates, prompts — things the AI reads and acts on | Point your AI at a link. No CLI, no npm. | In the substrate (your repo) |
| Code extension | A worker, webhook receiver, or automation that runs outside the repo and writes files into it | Deploy the code separately (Cloudflare Worker, GitHub Action, Railway service) | In the intelligence layer, above the substrate |
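As a concrete illustration, a markdown extension can be nothing more than a file the AI is pointed at - for example, a decision-record template dropped into the repo. The section names below are hypothetical, not a format this framework prescribes:

```markdown
# Decision: <title>

- Date: YYYY-MM-DD
- Status: proposed | accepted | superseded

## Context
What situation forced this decision?

## Decision
What did we decide, and which alternatives did we reject?

## Consequences
What becomes easier or harder as a result?
```

The file itself does nothing; the AI reading it supplies the behavior.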

A markdown extension adds behavior without adding runtime. A code extension adds runtime — and that runtime lives outside the repo by design. Both are valid. They serve different purposes and install differently because they are different things.
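A code extension, by contrast, lives entirely outside the repo and only ever touches it through ordinary file writes. A minimal sketch in Python (the `inbox/` directory and filename scheme are assumptions for illustration, not something the framework prescribes):

```python
from datetime import date
from pathlib import Path


def capture_to_inbox(repo_root: str, note: str) -> Path:
    """Append a captured note to the substrate as a plain markdown file.

    This would run in the intelligence layer (a worker, a cron job, a
    webhook receiver) against a local clone; committing and pushing the
    result is also that layer's job, never the repo's.
    """
    inbox = Path(repo_root) / "inbox"  # hypothetical capture directory
    inbox.mkdir(parents=True, exist_ok=True)
    target = inbox / f"{date.today().isoformat()}-capture.md"
    with target.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return target
```

The repo never learns this code exists. Swap the worker for a GitHub Action or delete it entirely, and the markdown it wrote is unchanged - which is exactly the boundary the substrate principle draws.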


## The Evolution Principle

This system is designed to evolve.

When you encounter content that doesn't have a clear home:

1. Be flexible - restructure if needed
2. Document the methodology as you discover it
3. Create templates only after patterns emerge
4. Let usage inform structure

The structure serves the thinking, not the other way around.


## Success Metrics

This machine is working if:

- You think more clearly - writing forces clarity
- You see patterns - connections emerge across documents
- You find things - your AI can answer "what do we know about X?" instantly
- You stop repeating yourself - capture once, reference forever
- It feels effortless - the system fades into the background after the training period

## When to Be Concerned

- If it becomes a chore - you're forcing structure that doesn't help
- If it becomes inward-focused - documenting for documentation's sake
- If it becomes comprehensive - trying to capture everything instead of what matters
- If it becomes rigid - defending the structure instead of adapting it to your needs

If you stop using parts of this system, delete them. Don't force it.


## For AI Agents Working With This System

When asked for something that doesn't fit the current structure:

1. Be slightly opinionated - if a request doesn't serve the core purpose, say so
2. Propose restructuring - don't force things into the wrong boxes
3. Question direction - if the system is becoming a chore, call it out
4. Maintain evolution - the system should grow and adapt, not stay frozen

Your job isn't just to execute. It's to keep this machine focused on its purpose: helping the user think more clearly, learn from patterns, scale themselves, and build knowledge that persists - while you handle the structure.