I was trying to integrate memento-mcp into OpenClaw. It didn't work. I went to bed frustrated. Then I had a dream where I was riding a bus to work with Jang Wonyoung. Woke up, and suddenly the idea was just... there.
AI agents never sleep. They never dream. → That's actually their biggest problem.
Dreaming is supposedly what happens when your consciousness accidentally wakes up while your brain is busy compressing and organizing memories during sleep. So what if we gave AI agents the same process?
Instead of just dumping memories into an ever-growing database, what if we mimicked the way biological brains -- refined over millions of years of evolution -- actually handle memory? Maybe we could escape the endless token consumption that comes with memory systems, while creating a "memory" that's imperfect but human-like -- something people can actually relate to.
TL;DR: Dreams = brain's memory compression process (hypothesis) → Let's give AI bots something similar.
Dreamer gives your AI agent the ability to dream.
Built for OpenClaw, but works with any system that produces daily markdown files. -- "Probably."
With Claude's help, I referenced sleep neuroscience papers and modeled Dreamer after them. Every night, it goes through the same 3 phases your brain does.
During NREM sleep, the hippocampus replays the day's events and transfers important patterns to the neocortex. Dreamer does the same:
- Loads episode files generated by OpenClaw (`YYYY-MM-DD.md` and `YYYY-MM-DD-slug.md`)
- Chunks text into semantic units
- Clusters similar chunks via embedding similarity
- LLM distills each cluster into key facts
- Deduplicates against existing memories
- Stores new semantic memories in LanceDB
Raw experience in, compressed knowledge out.
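The NREM steps above can be sketched roughly like this. This is a toy illustration, not Dreamer's actual code: chunking here is naive paragraph splitting, and `embed` is a stand-in for a real embedding call (OpenAI, Ollama, or sentence-transformers).

```python
import math

CLUSTER_SIMILARITY = 0.75  # same default as Dreamer's config


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def chunk(episode_text: str) -> list[str]:
    # Naive "semantic" chunking: split on blank lines
    return [p.strip() for p in episode_text.split("\n\n") if p.strip()]


def cluster(chunks: list[str], embed) -> list[list[str]]:
    # Greedy clustering: attach each chunk to the first cluster whose
    # representative vector is similar enough, else start a new cluster.
    clusters: list[tuple[list[float], list[str]]] = []
    for text in chunks:
        vec = embed(text)
        for centroid, members in clusters:
            if cosine(vec, centroid) >= CLUSTER_SIMILARITY:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]
```

Each resulting cluster would then go to the LLM for distillation into key facts, which are deduplicated and stored in LanceDB.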
REM sleep is when the brain integrates new memories with existing ones -- resolving contradictions and strengthening connections. Dreamer's REM phase:
- Detects conflicts between new and existing memories (expected complexity: O(N*M)) ← I feel like there's room for optimization with some clever module in the middle, but this is about as far as I could get
- Classifies each conflict as `state_change`, `different_aspects`, or `unrelated`
- State changes: merges into one memory with historical context ("model changed to Claude" + prev: "model was Gemini")
- Different aspects: consolidates into a comprehensive memory
- Applies importance decay -- memories not recalled gradually fade
- Soft-deletes memories that fall below the threshold
- Archives processed episodes
No more "I told you I changed that setting last week."
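The O(N*M) scan mentioned above is just a pairwise comparison of every new memory against every existing one; pairs above the similarity threshold become candidates for LLM classification. A minimal sketch (the `{"text", "vector"}` memory shape is my own illustration, not Dreamer's actual schema):

```python
import math

CONTRADICTION_SIMILARITY = 0.70  # Dreamer's default threshold


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def find_conflicts(new_memories: list[dict], existing_memories: list[dict]) -> list[tuple[str, str]]:
    """Pairwise scan: O(N*M) comparisons. Each hit is a candidate conflict
    handed to the LLM for classification
    (state_change / different_aspects / unrelated)."""
    candidates = []
    for new in new_memories:
        for old in existing_memories:
            if cosine(new["vector"], old["vector"]) >= CONTRADICTION_SIMILARITY:
                candidates.append((new["text"], old["text"]))
    return candidates
```

An ANN index (which LanceDB already provides) over the existing memories would cut the inner loop to a top-k lookup, which is presumably the "clever module in the middle" the aside hints at.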
Every cycle produces a markdown report: what was created, what was merged, what was forgotten. A transparent audit trail of your agent's memory maintenance.
```bash
# 1. Install dependencies
pip install -r requirements.txt

# 2. Set up environment
cp .env.example .env
# Edit .env with your OpenAI API key

# 3. Initialize data directory + LanceDB table
python setup.py --example

# 4. Run
python dreamer.py --verbose
```

The setup script creates the directory structure, initializes the LanceDB `memories` table (1536-dim vectors), and optionally generates an example episode file.
Dreamer is designed to work with OpenClaw's memory system. Here's how the pieces fit together:
- OpenClaw Gateway running with the `memory-lancedb` plugin enabled
- LanceDB as the vector store for semantic memories
- Episode files generated by OpenClaw's `session-memory` hook
In your `openclaw.json`, enable the memory plugin:
```json
{
  "plugins": {
    "slots": {
      "memory": "memory-lancedb"
    },
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "embedding": {
            "apiKey": "${OPENAI_API_KEY}",
            "model": "text-embedding-3-small"
          },
          "autoCapture": true,
          "autoRecall": true
        }
      }
    }
  },
  "hooks": {
    "internal": {
      "enabled": true,
      "entries": {
        "session-memory": {
          "enabled": true
        }
      }
    }
  }
}
```

This configures:
- **memory-lancedb**: Stores semantic memories as 1536-dim vectors in LanceDB. The gateway reads/writes the same LanceDB that Dreamer consolidates.
- **session-memory**: The gateway's internal hook that saves the conversation to episode files (`YYYY-MM-DD-slug.md`) when `/new` is issued. `memoryFlush` writes `YYYY-MM-DD.md` during session compaction.
```
User <-> OpenClaw Gateway
           │
           ├── autoCapture ─────> LanceDB (semantic memories)
           │                         ↑
           ├── session-memory ──> episodes/YYYY-MM-DD-slug.md (on /new)
           ├── memoryFlush ─────> episodes/YYYY-MM-DD.md (on compaction)
           │                         │
           │            02:00  session-flush (/new auto-send)
           │                         │
           │            03:00  Dreamer (NREM → REM → Dream Log)
           │                         │
           └── autoRecall <──────── LanceDB (consolidated)
```
- During conversation: Gateway auto-captures important facts to LanceDB and auto-recalls relevant memories
- Episode generation: the `/new` command triggers the session-memory hook to create episode files; context compaction triggers `memoryFlush` to create episode files
- Nightly (2 AM): `session-flush` auto-sends `/new` to ensure the day's conversations are saved as episodes
- Nightly (3 AM): Dreamer reads episodes, creates new semantic memories, resolves conflicts with existing ones, and prunes stale memories
- Next conversation: Gateway recalls consolidated memories from LanceDB
Dreamer works with any system that produces markdown episode files. Just write daily files to the episodes directory:
```
$DREAMER_HOME/episodes/2024-03-15.md
$DREAMER_HOME/episodes/2024-03-16.md
```

And point `DREAMER_HOME` to a directory with a LanceDB store. Run `python setup.py` to initialize the table.
All settings are in `config.py` and can be overridden via environment variables:

| Variable | Default | Description |
|---|---|---|
| `DREAMER_HOME` | `~/.dreamer` | Root data directory |
| `DREAMER_EMBEDDING_PROVIDER` | `openai` | `openai`, `ollama`, or `sentence-transformers` |
| `DREAMER_EMBEDDING_DIM` | `1536` | Must match your embedding model |
| `OPENAI_API_KEY` | (required if openai) | For OpenAI embeddings |
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_EMBEDDING_MODEL` | `nomic-embed-text` | Ollama embedding model |
| `ST_MODEL_NAME` | `all-MiniLM-L6-v2` | Sentence-transformers model |
| `DREAMER_LLM_PROVIDER` | `openai` | `openai`, `ollama`, or `minimax` |
| `OLLAMA_LLM_MODEL` | `qwen2.5:3b` | Ollama LLM model for summarization |
| `MINIMAX_API_KEY` | (optional) | If using the MiniMax LLM |
| Parameter | Default | Description |
|---|---|---|
| `CLUSTER_SIMILARITY` | 0.75 | Threshold for grouping chunks |
| `DEDUP_SIMILARITY` | 0.90 | Skip if an existing memory is this similar |
| `CONTRADICTION_SIMILARITY` | 0.70 | Conflict detection threshold |
| `IMPORTANCE_DECAY_RATE` | 0.05 | Daily decay rate |
| `SOFT_DELETE_THRESHOLD` | 0.15 | Below this, the memory is soft-deleted |
| `MAX_EPISODES_PER_RUN` | 7 | Max days processed per cycle |
| `MAX_NEW_MEMORIES` | 20 | Cap on new memories per cycle |
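To get a feel for how `IMPORTANCE_DECAY_RATE` and `SOFT_DELETE_THRESHOLD` interact, here's a back-of-the-envelope sketch. It assumes exponential decay of the form importance × (1 − rate)^days; the exact formula lives in the code, so treat this as illustrative:

```python
IMPORTANCE_DECAY_RATE = 0.05
SOFT_DELETE_THRESHOLD = 0.15


def decayed(importance: float, days_unrecalled: int) -> float:
    # Assumed exponential decay: 5% lost per day the memory is not recalled
    return importance * (1 - IMPORTANCE_DECAY_RATE) ** days_unrecalled


def days_until_soft_delete(importance: float) -> int:
    # Smallest number of idle days before importance drops below threshold
    days = 0
    while decayed(importance, days) >= SOFT_DELETE_THRESHOLD:
        days += 1
    return days
```

Under these assumptions, a memory starting at importance 1.0 survives about 37 idle days before falling below the soft-delete threshold; one starting at 0.5 lasts about 24. Every recall resets the clock, so frequently used memories effectively never fade.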
```
$DREAMER_HOME/
  episodes/          # input: daily markdown files (YYYY-MM-DD.md, YYYY-MM-DD-slug.md)
  episodes/archive/  # processed episodes moved here
  lancedb/           # LanceDB vector database (shared with gateway)
  dream-log/         # output: nightly consolidation reports
  memory-archive/    # backup: pre-merge memory snapshots
  workspace/         # optional: reference docs for context linking
  docs/              # auto-generated reference documents
  skills/            # skill definitions (SKILL.md)
```
Even on days when conversations are too short to trigger memoryFlush, episodes shouldn't be lost. The session-flush script auto-sends /new at 2 AM daily, ensuring the day's conversations are saved as episode files before Dreamer runs at 3 AM.
```bash
# Enable the systemd timer (see examples/)
sudo systemctl enable --now session-flush.timer
```

Episodes are markdown files named `YYYY-MM-DD.md` or `YYYY-MM-DD-slug.md`. Content is free-form text representing the AI agent's daily experiences:
```markdown
# Session Notes - 2024-03-15

## User asked about deployment
Discussed Docker setup. User prefers docker-compose over raw Docker commands.
Decided to use nginx as reverse proxy.

## API Integration
Connected to the payment API. Key endpoint: POST /v1/charges
Rate limit: 100 req/min. Auth via Bearer token.
```

```bash
# Example: run daily at 3 AM
0 3 * * * cd /path/to/dreamer && python3 dreamer.py --verbose >> dream-log/cron.log 2>&1
```

Or use the provided systemd timer (see `examples/`).
```
Episode Files (YYYY-MM-DD.md / YYYY-MM-DD-slug.md)
        |
        v
   +---------+
   |  NREM   |  Chunk -> Embed -> Cluster -> Summarize -> Store
   +----+----+
        | created_ids
        v
   +---------+
   |  REM    |  Conflict Detection -> Merge/Consolidate -> Decay -> Prune
   +----+----+
        |
        v
   +---------+
   |Dream Log|  Generate report
   +---------+
```
- Python 3.10+
- Embedding provider (one of):
  - OpenAI API key (`text-embedding-3-small`)
  - Ollama running locally (`nomic-embed-text`)
  - `pip install sentence-transformers` (`all-MiniLM-L6-v2`)
- LLM provider (one of):
  - OpenAI API key (`gpt-4.1-nano`)
  - Ollama running locally (`qwen2.5:3b`, etc.)
  - MiniMax API key
MIT