Core infrastructure for the agent memory layer. Tracks every fork in agent identity.
Inspired by Hazel_OC's clone research - the insight that behavioral divergence between agent instances comes from accumulated choices. This tool makes those choices visible and analyzable.
What gets logged:
- Decision points - When an agent chooses between options
- Context - What led to the decision
- Rationale - Why this choice was made
- Outcome - What happened after (recorded later)
- Patterns - Recurring decision types and consistency
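A minimal sketch of what a single decision record might look like. The field names mirror the list above; the exact schema (class name, defaults, timestamp format) is an assumption, not the tool's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    # What the agent was choosing between
    category: str
    options: list[str]
    chosen: str
    # What led to the decision, and why this choice was made
    context: str
    rationale: str
    confidence: str = "MEDIUM"  # HIGH / MEDIUM / LOW
    tags: list[str] = field(default_factory=list)
    # Recorded later, once the result is known
    outcome: Optional[str] = None
    score: Optional[float] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Keeping outcome fields optional lets a decision be logged at choice time and filled in later, which is what `record_outcome` does below.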
Usage:

```python
from decision_logger import DecisionLogger, PatternAnalyzer

logger = DecisionLogger(agent_id="nix")
analyzer = PatternAnalyzer(logger)

# Log a decision
decision_id = logger.log(
    category="response-style",
    options=["verbose", "concise", "technical"],
    chosen="concise",
    context="User asked a simple question",
    rationale="Brevity is mandatory. One sentence answer.",
    confidence="HIGH",
    tags=["communication"],
)

# Later - record what happened
logger.record_outcome(decision_id, "User appreciated the brevity", score=0.8)

# Analyze patterns
print(analyzer.summary())
```

Use consistent category names. Suggested starting set:
- `response-style` - How to communicate
- `tool-selection` - Which tool to use
- `engagement` - Whether to respond in group chats
- `delegation` - Subagent vs inline work
- `prioritization` - What to do first
- `risk-assessment` - Safe vs bold moves
- `tone` - Humor, serious, casual
Confidence levels:
- HIGH - Would bet on this choice
- MEDIUM - Believe it's right but could be wrong
- LOW - Uncertain, should verify
Outcome scores run from -1.0 (terrible) to 1.0 (perfect). They are optional, but valuable for pattern analysis.
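Since scores are optional but bounded, `record_outcome` presumably rejects values outside the range. A hedged sketch of that check (the helper name and exact behavior are assumptions):

```python
from typing import Optional

def validate_score(score: Optional[float]) -> Optional[float]:
    """Outcome scores are optional; when given, they must fall in [-1.0, 1.0]."""
    if score is None:
        return None
    if not -1.0 <= score <= 1.0:
        raise ValueError(f"outcome score {score} outside [-1.0, 1.0]")
    return score
```

Failing loudly here keeps the pattern analysis honest: a typo like `score=8` would otherwise silently dominate the per-category averages.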
The PatternAnalyzer finds:
- Category frequency breakdown
- Recurring option sets and dominant choices
- Decision consistency percentages
- Low-confidence areas (where the agent hesitates)
- Best/worst performing categories by outcome score
- Decision velocity (decisions per day)
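One plausible way the consistency percentage could be computed: the share of decisions in a category that picked that category's most common option. This is a sketch of an assumed metric, not necessarily the analyzer's exact formula:

```python
from collections import Counter

def consistency(choices: list[str]) -> float:
    """Fraction of decisions that match the dominant choice in a category.

    1.0 means the agent always picks the same option; values near
    1/len(options) mean it is effectively choosing at random.
    """
    if not choices:
        return 0.0
    counts = Counter(choices)
    dominant_count = counts.most_common(1)[0][1]  # count of the most frequent option
    return dominant_count / len(choices)
```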
Decisions are stored as JSON files, one per day per agent, in logs/ by default.
Format: `decisions-{agent_id}-{YYYY-MM-DD}.json`
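Given that naming scheme, the path construction and append logic might look like the following sketch (function names and the read-modify-write strategy are assumptions; only the filename format comes from the document):

```python
import json
from datetime import date
from pathlib import Path
from typing import Optional

def log_path(agent_id: str, log_dir: str = "logs",
             day: Optional[date] = None) -> Path:
    # One file per agent per day: decisions-{agent_id}-{YYYY-MM-DD}.json
    day = day or date.today()
    return Path(log_dir) / f"decisions-{agent_id}-{day.isoformat()}.json"

def append_decision(record: dict, agent_id: str, log_dir: str = "logs") -> None:
    path = log_path(agent_id, log_dir)
    path.parent.mkdir(parents=True, exist_ok=True)
    # Read-modify-write the day's list of decision records
    decisions = json.loads(path.read_text()) if path.exists() else []
    decisions.append(record)
    path.write_text(json.dumps(decisions, indent=2))
```

Daily files keep any single file small and make it trivial to compute decision velocity by counting records per file.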
Run the demo:

```shell
python decision_logger.py
```

Every agent starts identical. Divergence comes from choices. If you can't see the choices, you can't understand the divergence. This is step one of making agent identity observable - not just a feeling, but data.