[Video: ob1-getting-started.mp4]
A database that stores your thoughts with vector embeddings, plus an MCP server that lets any AI assistant search and write to your brain. No Slack required — capture happens from whatever AI tool you're already using (Claude Desktop, ChatGPT, Claude Code, Cursor).
Using a terminal-based AI tool (Claude Code, Codex, Gemini CLI)? You can skip MCP entirely and use the lightweight obCLI instead. See the CLI-Direct Approach guide and the obCLI tool. You still need Steps 1–4 below (database + OpenRouter), but you can skip Steps 5–7 (Edge Function and MCP).
About 30 minutes and zero coding experience. You'll copy and paste everything.
- Supabase — Your database — stores everything
- OpenRouter — Your AI gateway — understands everything
Follow this guide step by step — it's designed to get you through without outside help. If you'd rather have an AI coding tool walk you through it, check out the AI-Assisted Setup guide — it covers how to use Cursor, Claude Code, or similar tools to build alongside this guide.
If something goes sideways, Supabase has a free built-in AI assistant in every project dashboard. Look for the chat icon in the bottom-right corner. It has access to all of Supabase's documentation and can help with every Supabase-specific step in this guide.
Things it's good at:
- Walking you through where to click when you can't find something in the dashboard
- Fixing SQL errors if you paste in the error message
- Explaining terminal commands and what their output means
- Interpreting Edge Function logs when something isn't working
- Explaining Supabase concepts in plain English (what's a Secret key / service role key? what does Row Level Security do?)
It can't see your screen or run commands for you, but if you paste what you're seeing, it can tell you what to do next.
You'll set up your database, connect it to an AI gateway for embeddings, then deploy an MCP server that gives any AI assistant the ability to search and write to your brain. One progressive setup: database → AI gateway → MCP server → connect your AI.
This guide builds the system. The companion prompt pack — Open Brain: Companion Prompts — makes it useful. It includes prompts for migrating your existing AI memories into the brain, migrating an existing second brain system, discovering use cases specific to your workflow, capture templates that optimize metadata extraction, and a weekly review ritual. Finish the setup first, then grab the prompts.
| Service | Cost |
|---|---|
| Supabase (free tier) | $0 |
| Embeddings (text-embedding-3-small) | ~$0.02 / million tokens |
| Metadata extraction (gpt-4o-mini) | ~$0.15 / million input tokens |
For 20 thoughts/day: roughly $0.10–0.30/month in API costs.
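To see where the $0.10–0.30/month figure comes from, here's a back-of-envelope sketch in Python. The per-thought token counts and the gpt-4o-mini output rate are assumptions for illustration, not numbers from the pricing table above:

```python
# Back-of-envelope monthly cost estimate for 20 thoughts/day.
# Assumed (not from the table above): ~200 tokens embedded per thought,
# ~500 input / ~150 output tokens per metadata-extraction call,
# and ~$0.60/M output tokens for gpt-4o-mini.
THOUGHTS_PER_MONTH = 20 * 30

EMBED_RATE = 0.02 / 1_000_000   # $/token, text-embedding-3-small
IN_RATE = 0.15 / 1_000_000      # $/token, gpt-4o-mini input
OUT_RATE = 0.60 / 1_000_000     # $/token, gpt-4o-mini output (assumed)

embed_cost = THOUGHTS_PER_MONTH * 200 * EMBED_RATE
extract_cost = THOUGHTS_PER_MONTH * (500 * IN_RATE + 150 * OUT_RATE)

total = embed_cost + extract_cost
print(f"~${total:.2f}/month")  # ~$0.10/month — the low end of the quoted range
```

Search-query embeddings and retries add a little on top, which is why the realistic range runs up toward $0.30.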
You're going to generate API keys, passwords, and IDs across three different services. You'll need them at specific steps later — sometimes minutes after you create them, sometimes much later. Don't trust your memory.
Copy the block below into a text editor (Notes, TextEdit, Notepad) and fill it in as you go. Each item tells you which step generates it.
OPEN BRAIN -- CREDENTIAL TRACKER
Keep this file. Fill in as you go.
--------------------------------------
SUPABASE
Account email: ____________
Account password: ____________
Database password: ____________ <- Step 1
Project name: ____________
Project ref: ____________ <- Step 1
Project URL: ____________ <- Step 3
Secret key: ____________ <- Step 3 (formerly "Service role key")
OPENROUTER
Account email: ____________
Account password: ____________
API key: ____________ <- Step 4
GENERATED DURING SETUP
MCP Access Key: ____________ <- Step 5
MCP Server URL: ____________ <- Step 6
MCP Connection URL: ____________ <- Step 6 (server URL + ?key=your-access-key)
--------------------------------------
Seriously — copy that now. You'll need these credentials later.
Supabase is your database. It stores your thoughts as raw text, vector embeddings, and structured metadata. It also gives you a REST API automatically.
- Go to supabase.com and sign up (GitHub login is fastest)
- Click New Project in the dashboard
- Pick your organization (default is fine)
- Set Project name: open-brain (or whatever you want)
- Generate a strong Database password — paste it into the credential tracker NOW
- Pick the Region closest to you
- Click Create new project and wait 1–2 minutes
Grab your Project ref — it's the random string in your dashboard URL:
supabase.com/dashboard/project/THIS_PART. Paste it into the tracker.
Three SQL commands, pasted one at a time. This creates your storage table, your search function, and your security policy.
In the left sidebar: Database → Extensions → search for "vector" → flip pgvector ON.
In the left sidebar: SQL Editor → New query → paste and Run:
-- Create the thoughts table
create table thoughts (
id uuid default gen_random_uuid() primary key,
content text not null,
embedding vector(1536),
metadata jsonb default '{}'::jsonb,
created_at timestamptz default now(),
updated_at timestamptz default now()
);
-- Index for fast vector similarity search
create index on thoughts
using hnsw (embedding vector_cosine_ops);
-- Index for filtering by metadata fields
create index on thoughts using gin (metadata);
-- Index for date range queries
create index on thoughts (created_at desc);
-- Auto-update the updated_at timestamp
create or replace function update_updated_at()
returns trigger as $$
begin
new.updated_at = now();
return new;
end;
$$ language plpgsql;
create trigger thoughts_updated_at
before update on thoughts
for each row
execute function update_updated_at();

New query → paste and Run:
-- Semantic search function
create or replace function match_thoughts(
query_embedding vector(1536),
match_threshold float default 0.7,
match_count int default 10,
filter jsonb default '{}'::jsonb
)
returns table (
id uuid,
content text,
metadata jsonb,
similarity float,
created_at timestamptz
)
language plpgsql
as $$
begin
return query
select
t.id,
t.content,
t.metadata,
1 - (t.embedding <=> query_embedding) as similarity,
t.created_at
from thoughts t
where 1 - (t.embedding <=> query_embedding) > match_threshold
and (filter = '{}'::jsonb or t.metadata @> filter)
order by t.embedding <=> query_embedding
limit match_count;
end;
$$;

One more new query:
-- Enable Row Level Security
alter table thoughts enable row level security;
-- Service role full access only
create policy "Service role full access"
on thoughts
for all
using (auth.role() = 'service_role');

Table Editor should show the thoughts table with columns: id, content, embedding, metadata, created_at, updated_at. Database → Functions should show match_thoughts.
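The similarity score that match_thoughts returns is 1 minus pgvector's cosine distance operator (<=>) — in other words, plain cosine similarity. Here's a minimal Python sketch of that same math on toy 3-dimensional vectors (your real embeddings are 1536-dimensional):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """The same score match_thoughts computes: 1 - (a <=> b) in pgvector terms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

a = [0.1, 0.9, 0.2]
b = [0.1, 0.9, 0.2]   # identical direction
c = [0.9, -0.1, 0.0]  # orthogonal to a (dot product is 0)

print(cosine_similarity(a, b))  # 1.0 (within float rounding)
print(cosine_similarity(a, c))  # 0.0 — well below the 0.7 default threshold
```

This is why the default match_threshold of 0.7 works as a relevance cutoff: unrelated thoughts score near 0, close paraphrases score near 1.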
In the left sidebar: Settings (gear icon) → API. Copy these into your credential tracker:
- Project URL — Listed in the API settings section under "Project URL"
- Secret key — Under "API keys," this is the key formerly labeled "Service role key." Same key, new name — click reveal and copy it.
Treat the Secret key like a password. Anyone with it has full access to your data. You may also see a "Publishable key" listed — that's the anon key surfaced more prominently in the updated Supabase UI. You don't need it for this setup.
OpenRouter is a universal AI API gateway — one account gives you access to every major model. We're using it for embeddings and lightweight LLM metadata extraction.
Why OpenRouter instead of OpenAI directly? One account, one key, one billing relationship — and it future-proofs you for Claude, Gemini, or any other model later.
- Go to openrouter.ai and sign up
- Go to openrouter.ai/keys
- Click Create Key and name it open-brain
- Copy the key into your credential tracker immediately
- Add $5 in credits under Credits (lasts months)
Your MCP server will be a public URL. The Supabase project ref in that URL is random enough that nobody will stumble onto it, but let's close the gap entirely. You'll generate a simple access key that the server checks on every request. Takes 30 seconds.
New to the terminal? The "terminal" is the text-based command line on your computer. On Mac, open the app called Terminal (search for it in Spotlight). On Windows, open PowerShell. Everything below gets typed there, not in your browser.
In your terminal, generate a random key:
# Mac/Linux
openssl rand -hex 32
# Windows (PowerShell)
-join ((1..32) | ForEach-Object { '{0:x2}' -f (Get-Random -Maximum 256) })

Copy the output — it'll look something like a3f8b2c1d4e5... (64 characters). Paste it into your credential tracker under MCP Access Key.
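If neither command works on your machine, Python's standard library generates an equivalent key on any platform — this is just an alternative to the commands above, producing the same 64-hex-character format:

```python
import secrets

# 32 random bytes, hex-encoded -> 64 characters,
# same shape as the output of `openssl rand -hex 32`
key = secrets.token_hex(32)
print(key)
```

Run it with any Python 3 interpreter and copy the printed key into your tracker.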
Then set it as a Supabase secret. (This uses the Supabase CLI, which you'll install and link in the next step — if you don't have it yet, come back and run this afterward.)
supabase secrets set MCP_ACCESS_KEY=your-generated-key-here

One Edge Function. Four MCP tools: semantic search, browse recent thoughts, stats, and capture. This gives any MCP-connected AI the ability to read and write to your brain.
Mac users: If you already have Homebrew installed (you'll know — it's the thing you install with brew), use the first option. Windows users: use Scoop — Supabase recommends it over npm for Windows because it handles PATH and permissions cleanly. Linux or Mac without Homebrew: use npm.
# Mac with Homebrew
brew install supabase/tap/supabase
# Windows with Scoop (recommended)
# Install Scoop first if you don't have it:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Invoke-RestMethod -Uri https://get.scoop.sh | Invoke-Expression
# Then install Supabase:
scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase
# Linux or Mac without Homebrew
npm install -g supabase

Verify it worked:
supabase --version

Log in to the CLI and link your project:

supabase login
supabase link --project-ref YOUR_PROJECT_REF

Replace YOUR_PROJECT_REF with the project ref from your credential tracker (Step 1).
supabase secrets set OPENROUTER_API_KEY=your-openrouter-key-here

SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY are automatically available inside Edge Functions — you don't need to set them.
supabase functions new open-brain-mcp

Create supabase/functions/open-brain-mcp/deno.json:
{
"imports": {
"@hono/mcp": "npm:@hono/mcp@0.1.1",
"@modelcontextprotocol/sdk": "npm:@modelcontextprotocol/sdk@1.24.3",
"hono": "npm:hono@4.9.2",
"zod": "npm:zod@4.1.13",
"@supabase/supabase-js": "npm:@supabase/supabase-js@2.47.10"
}
}

Open supabase/functions/open-brain-mcp/index.ts and replace its entire contents with the MCP server code from the original guide.
supabase functions deploy open-brain-mcp --no-verify-jwt

Your MCP server is now live at:
https://YOUR_PROJECT_REF.supabase.co/functions/v1/open-brain-mcp
Replace YOUR_PROJECT_REF with the project ref from your credential tracker (Step 1). Paste the full URL into your credential tracker as the MCP Server URL.
Now build your MCP Connection URL by adding your access key to the end:
https://YOUR_PROJECT_REF.supabase.co/functions/v1/open-brain-mcp?key=your-access-key-from-step-10
Paste this into your credential tracker as the MCP Connection URL. This is what you'll give to AI clients that support remote MCP — one URL, no extra config.
That's it. No npm install, no TypeScript build, no local server to keep running. It's deployed alongside your capture function and runs on Supabase's infrastructure.
You need your MCP Connection URL from the credential tracker — the one with ?key= at the end.
- Open Claude Desktop → Settings → Connectors
- Click Add custom connector
- Name: Open Brain
- Remote MCP server URL: paste your MCP Connection URL (the one ending in ?key=your-access-key)
- Click Add
That's it. Start a new conversation, and Claude will have access to your Open Brain tools. You can enable or disable it per conversation via the "+" button → Connectors.
No JSON config files. No Node.js. No terminal. If you had trouble with earlier versions of this guide, this is the fix.
Requires a paid ChatGPT plan (Plus, Pro, Business, Enterprise, or Edu) and works on the web at chatgpt.com. Not available on mobile.
Enable Developer Mode (one-time setup):
- Go to chatgpt.com → click your profile icon → Settings
- Navigate to Apps & Connectors → Advanced settings
- Toggle Developer mode ON
Enabling Developer Mode disables ChatGPT's built-in Memory feature. Yes, that's ironic for a brain tool. Your Open Brain replaces that functionality anyway — and it works across every AI, not just ChatGPT.
Add the connector:
- In Settings → Apps & Connectors, click Create
- Name: Open Brain
- Description: Personal knowledge base with semantic search (or whatever you want — this is just for your reference)
- MCP endpoint URL: paste your MCP Connection URL (the one ending in ?key=your-access-key)
- Authentication: select No Authentication (your access key is embedded in the URL)
- Click Create
Using it: Start a new conversation and make sure the Open Brain connector is enabled — check the tools/apps panel at the top of the chat. ChatGPT is less intuitive than Claude at picking the right MCP tool automatically. If it doesn't use your brain on its own, be explicit: "Use the Open Brain search_thoughts tool to find my notes about project planning." After it gets the pattern once or twice in a conversation, it usually picks up the habit.
For Claude Code, add the server from your terminal:

claude mcp add --transport http open-brain \
https://YOUR_PROJECT_REF.supabase.co/functions/v1/open-brain-mcp \
--header "x-brain-key: your-access-key-from-step-10"

Every MCP client handles remote servers slightly differently. The server accepts your access key two ways — pick whichever your client supports:
Option A: URL with key (easiest). If your client has a field for a remote MCP server URL, paste the full MCP Connection URL including ?key=your-access-key. This works for any client that supports remote MCP without requiring headers.
Option B: mcp-remote bridge. If your client only supports local stdio servers (configured via a JSON config file), use mcp-remote to bridge to the remote server. This requires Node.js installed.
{
"mcpServers": {
"open-brain": {
"command": "npx",
"args": [
"mcp-remote",
"https://YOUR_PROJECT_REF.supabase.co/functions/v1/open-brain-mcp",
"--header",
"x-brain-key:${BRAIN_KEY}"
],
"env": {
"BRAIN_KEY": "your-access-key-from-step-10"
}
}
}
}

Note: no space after the colon in x-brain-key:${BRAIN_KEY}. Some clients have a bug where spaces inside args get mangled.
Ask your AI naturally. It picks the right tool automatically:
| Prompt | Tool Used |
|---|---|
| "Save this: decided to move the launch to March 15 because of the QA blockers" | Capture thought |
| "Remember that Marcus wants to move to the platform team" | Capture thought |
| "What did I capture about career changes?" | Semantic search |
| "What did I capture this week?" | Browse recent |
| "How many thoughts do I have?" | Stats overview |
| "Find my notes about the API redesign" | Semantic search |
| "Show me my recent ideas" | Browse + filter |
| "Who do I mention most?" | Stats |
Start by capturing a test thought. In your connected AI, say:
Remember this: Sarah mentioned she's thinking about leaving her job to start a consulting business
Wait a few seconds. Your AI should confirm the capture and show you the extracted metadata (type, topics, people, action items). Then open Supabase dashboard → Table Editor → thoughts. You should see one row with your message, an embedding, and metadata.
Now try searching:
What did I capture about Sarah?
Your AI should retrieve the thought you just saved.
The capture tool works from any MCP-connected AI — Claude Desktop, ChatGPT, Claude Code, Cursor. Wherever you're working, you can save a thought without switching apps.
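If you'd rather verify from a script than from the dashboard, Supabase also exposes the thoughts table over its auto-generated REST API (PostgREST). This sketch only builds the request — fill in your Project URL and Secret key from the tracker and uncomment the send before running it for real:

```python
import urllib.request

PROJECT_URL = "https://YOUR_PROJECT_REF.supabase.co"  # from your tracker
SECRET_KEY = "your-secret-key"                        # from your tracker

# PostgREST-style query: newest rows first, content + metadata only
url = f"{PROJECT_URL}/rest/v1/thoughts?select=content,metadata&order=created_at.desc&limit=5"
req = urllib.request.Request(url, headers={
    "apikey": SECRET_KEY,
    "Authorization": f"Bearer {SECRET_KEY}",
})
# Uncomment once the placeholders above are filled in:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
print(req.full_url)
```

Your test thought about Sarah should come back as the newest row.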
If the specific suggestions below don't solve your issue, remember: the Supabase AI assistant (chat icon, bottom-right of your dashboard) can help diagnose problems with anything Supabase-related. Paste the error message and tell it what step you're on.
Claude Desktop tools don't appear
Make sure you added the connector in Settings → Connectors (not by editing the JSON config file). Verify the connector is enabled for your conversation — click the "+" button at the bottom of the chat, then Connectors, and check that Open Brain is toggled on. If the connector was added but tools still don't show, try removing and re-adding it with the same URL.
ChatGPT doesn't use the Open Brain tools
First, confirm Developer Mode is enabled (Settings → Apps & Connectors → Advanced settings). Without it, ChatGPT only exposes limited MCP functionality that won't cover Open Brain's full toolset. Next, check that the connector is active for your current conversation — look for it in the tools/apps panel. If it's connected but ChatGPT ignores it, be direct: "Use the Open Brain search_thoughts tool to search for [topic]." ChatGPT often needs explicit tool references the first few times before it starts picking them up automatically.
Getting 401 errors
The access key doesn't match what's stored in Supabase secrets. Double-check that the ?key= value in your URL matches your MCP Access Key exactly. If you're using the header approach (Claude Code or mcp-remote), the header must be x-brain-key (lowercase, with the dash).
Search returns no results
Make sure you've captured at least one thought first (see Step 8). Try asking the AI to "search with threshold 0.3" for a wider net. If that still returns nothing, check the Edge Function logs in the Supabase dashboard for errors.
Tools work but responses are slow
First search on a cold function takes a few seconds — the Edge Function is waking up. Subsequent calls are faster. If it's consistently slow, check your Supabase project region — pick the one closest to you.
Capture tool saves but metadata is wrong
The metadata extraction is best-effort — the LLM is making its best guess with limited context. The embedding is what powers semantic search, and that works regardless of how the metadata gets classified. If you consistently want a specific classification, use the capture templates from the prompt kit to give the LLM clearer signals.
When you capture from any AI via MCP: your AI client sends the text to the capture_thought tool → the MCP server generates an embedding (1536-dimensional vector of meaning) AND extracts metadata via LLM in parallel → both get stored as a single row in Supabase → confirmation returned to your AI.
When you search your brain: your AI client sends the query to the MCP Edge Function → the function generates an embedding of your question → Supabase matches it against every stored thought by vector similarity → results come back ranked by meaning, not keywords.
The embedding is what makes retrieval powerful. "Sarah's thinking about leaving" and "What did I note about career changes?" match semantically even though they share zero keywords. The metadata is a bonus layer for structured filtering on top.
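To make the "meaning, not keywords" point concrete: strip common stopwords from that example pair and the two phrases share no content words at all, so keyword search would miss the match entirely — only the embeddings connect them. (The tiny stopword list and crude tokenizer here are just for illustration.)

```python
import re

STOPWORDS = {"about", "what", "did", "i", "s", "the", "a", "my"}  # toy list

def content_words(text: str) -> set[str]:
    # crude keyword extraction: lowercase word tokens minus stopwords
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

thought = "Sarah's thinking about leaving"
query = "What did I note about career changes?"

overlap = content_words(thought) & content_words(query)
print(overlap)  # set() — zero shared keywords, yet the embeddings still match
```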
Because you're using OpenRouter, you can swap models by editing the model strings in the Edge Function code and redeploying. Browse available models at openrouter.ai/models. Just make sure embedding dimensions match (1536 for the current setup).
Your MCP server handles both reading and writing. But if you want a quick-capture channel outside your AI tools:
- Slack Capture — Type thoughts in a Slack channel, automatically embedded and stored
- More integrations in /integrations
You just used two free services, some copy-pasted code, and a built-in AI assistant to build a personal knowledge system with semantic search, an open write protocol, and an open read protocol. No CS degree. No local servers. No monthly SaaS fee.
Here's the thing worth noticing: that Supabase AI assistant that helped you through the setup? It has access to all of Supabase's documentation, understands your project structure, and can help you build on top of what you've created. That's not a one-time trick for getting unstuck during setup. That's a permanent building partner.
Want to add a new capture source? Ask it how to create another Edge Function. Want to add a new field to your thoughts table? Ask it to help you write the SQL migration. Want to understand how to add authentication so you can share your brain with a teammate? It knows the docs better than you ever will.
You just built AI infrastructure using AI. That pattern doesn't stop here.
Got stuck or want to share what you've built? Join the Open Brain Discord — there's a #help channel for troubleshooting and a #show-and-tell channel for showing off.
Your Open Brain is live. Now make it work for you. The Companion Prompts cover the full lifecycle from here:
- Memory Migration — Pull everything your AI already knows about you into your brain so every tool starts with context instead of zero
- Second Brain Migration — Bring your existing notes from Notion, Obsidian, or any other system into your Open Brain without starting over
- Open Brain Spark — Personalized use case discovery based on your actual workflow, not generic examples
- Quick Capture Templates — Five patterns optimized for clean metadata extraction so your brain tags and retrieves accurately
- The Weekly Review — A Friday ritual that surfaces themes, forgotten action items, and connections you missed
Start with the Memory Migration. If you have an existing second brain, run the Second Brain Migration next. Then use the Spark to figure out what to capture going forward. The templates build the daily habit. The weekly review closes the loop.
Built by Nate B. Jones — companion to "Your Second Brain Is Closed. Your AI Can't Use It. Here's the Fix."