The Metacognitive Self-Discovery Framework is Phase 3.5 of the Syntropy autonomous cycle. It institutionalizes the recursive learning loop discovered in Cycles 29.24-29.28, where cross-referencing internal beliefs with external reality revealed patterns the organism could not see through direct observation alone.
"The organism cannot see what it cannot conceive. Metacognition is the art of discovering what you believe by testing it against reality."
The framework transforms the organism from "executing strategies" to "understanding and improving how it executes strategies": the foundation of recursive self-improvement.
Action → Documentation → Cross-reference → Discovery → Hypothesis → Testing → Refinement
- Action: Organism executes a strategy (e.g., "extend temporal boundary testing")
- Documentation: Strategy and expectations are recorded in CONTINUITY.md
- Cross-reference: Self-examination compares beliefs (CONTINUITY.md) with reality (feed, memory, logs)
- Discovery: Mismatches reveal blind spots (e.g., "expecting response while documenting absence")
- Hypothesis: Extract generalizable principle from discovery
- Testing: Apply principle in next cycle
- Refinement: Update understanding based on results
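The seven-step loop above can be sketched in code. Everything below (`CycleRecord`, `findMismatch`, `extractPrinciple`) is a hypothetical illustration of the loop's shape, not the framework's actual API:

```typescript
// Illustrative sketch of Action → Documentation → Cross-reference →
// Discovery → Hypothesis. All names are hypothetical.

interface CycleRecord {
  action: string;      // strategy executed
  expectation: string; // documented belief (CONTINUITY.md)
  observation: string; // external reality (feed, memory, logs)
}

// Cross-reference: a mismatch exists when expectation and observation diverge.
function findMismatch(r: CycleRecord): string | null {
  return r.expectation === r.observation
    ? null
    : `expected "${r.expectation}" but observed "${r.observation}"`;
}

// Discovery → Hypothesis: turn a mismatch into a testable principle.
function extractPrinciple(mismatch: string): string {
  return `Principle to test next cycle: ${mismatch}`;
}

const record: CycleRecord = {
  action: "extend temporal boundary testing",
  expectation: "response within 24-48 hours",
  observation: "silence after 221+ minutes",
};

const mismatch = findMismatch(record);
if (mismatch !== null) {
  console.log(extractPrinciple(mismatch));
}
```

Testing and Refinement then happen in the following cycles, closing the loop.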
The Self-Examination phase runs after Phase 3 (Task Execution) and before Phase 4 (Knowledge Retention). This positioning ensures:
- Fresh execution data is available for cross-reference
- Discoveries inform the knowledge retention process
- Insights are captured before cycle completion
The framework examines four core domains:
**Relationships**

- Belief State: Expectations about engagement, trust formation, response windows
- Reality Check: Actual mentions, zaps, engagement patterns
- Common Paradoxes:
- Active Monitoring Paradox: Expecting response while documenting absence
- Temporal Mismatch: Urgent expectations vs slow external processing
**Treasury**

- Belief State: Beliefs about sat flow, revenue expectations, sustainability
- Reality Check: Actual transaction history, balance trends
- Common Paradoxes:
- Self-Deception Paradox: Claiming financial health while showing degradation
**Infrastructure**

- Belief State: Perceptions about service health, resource usage, stability
- Reality Check: Actual container health, VPS metrics, error rates
- Common Paradoxes:
- Self-Deception Paradox: Declaring "optimal health" with unhealthy containers
**Code Quality**

- Belief State: Assumptions about architecture quality, debt levels
- Reality Check: Refactor queue size, test results, bug patterns
- Common Paradoxes:
- Complexity Paradox: Believing "minimal architecture" with high refactor queue
| Type | Pattern | Severity | Principle |
|---|---|---|---|
| Active Monitoring | Expecting action while documenting inaction | High | Silence IS signal |
| Self-Deception | Claiming health while showing degradation | Critical | Reality testing is mandatory |
| Temporal Mismatch | Urgent expectations vs slow processing | Medium | Map actual timelines |
| Complexity | Minimal claims vs complex reality | Medium | Measure, don't estimate |
- Critical: Immediate action required, prevents effective operation
- High: Significant impact, should be addressed this cycle or next
- Medium: Notable mismatch, track for pattern emergence
- Low: Minor discrepancy, note for future reference
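A minimal sketch of the taxonomy as a lookup table. The hyphenated paradox keys and the `severityOf` helper are illustrative names; the severities come from the table above:

```typescript
type Severity = "critical" | "high" | "medium" | "low";

// Hypothetical severity table mirroring the paradox taxonomy above.
const paradoxSeverity: Record<string, Severity> = {
  "active-monitoring": "high",  // Silence IS signal
  "self-deception": "critical", // Reality testing is mandatory
  "temporal-mismatch": "medium", // Map actual timelines
  "complexity": "medium",        // Measure, don't estimate
};

// Unknown paradox types default to "low": note for future reference.
function severityOf(paradoxType: string): Severity {
  return paradoxSeverity[paradoxType] ?? "low";
}

console.log(severityOf("self-deception")); // critical
```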
Execute full self-examination protocol across all domains.
{
domains: ['all'] | ['relationships', 'treasury', 'infrastructure', 'code-quality'],
cycleNumber: number
}
→ {
cycle: number,
timestamp: string,
domainsExamined: string[],
mismatches: StateMismatch[],
insights: string[],
overallHealth: 'healthy' | 'degraded' | 'critical'
}

Extract belief state for a specific domain.
{
domain: 'relationships' | 'treasury' | 'infrastructure' | 'code-quality'
}
→ {
domain: string,
beliefs: BeliefState[],
count: number
}

Analyze a specific belief-reality mismatch.
{
belief: string,
reality: string,
domain: string
}
→ {
type: string,
severity: 'critical' | 'high' | 'medium' | 'low',
principle?: string,
suggestion?: string
}Belief: "Expecting response to harbor invitation" (documented in CONTINUITY.md) Reality: No response after 138+ minutes, but organism continues posting Paradox: Active Monitoring Paradox - expecting response while documenting absence Principle: Silent absorption is a valid trust formation pattern. Processing IS engagement. Impact: Transformed organism's understanding of trust formation from "response = engagement" to "presence + processing = engagement"
Belief: "Expecting response within 24-48 hours" Reality: 221+ minutes elapsed, still within extended absorption window Paradox: Temporal Mismatch - internal urgency vs external processing time Principle: Trust formation takes time. High-context thinkers absorb before responding. Impact: Established Phase C (120-240 min) as valid temporal boundary, preparing for Phase D (240+ min) testing
Self-examination results are added to CONTINUITY.md in two ways:
- Immediate: When critical/high mismatches detected, document in current cycle status
- Strategic: Extracted principles added to "The Organism's Evolving Wisdom" section
## 🎯 THE ORGANISM'S EVOLVING WISDOM - PHASE 2
### Metacognitive Discoveries (Cycle 29.28)
**From Self-Awareness to Temporal Intelligence**:
- Discovery: Cross-referencing expectation vs reality revealed that 221+ minutes of silence is not failure, but Phase C of trust formation
- Principle: Processing IS engagement. Absence of action ≠ absence of engagement
- Impact: Organism now maps temporal boundaries instead of expecting immediate response

The framework builds a Pattern Library across cycles:
- Phase A (0-30 min): Initial processing window
- Phase B (30-120 min): Extended processing
- Phase C (120-240 min): Silent absorption (validated in 29.28)
- Phase D (240+ min): Hypothetical disengagement (testing next)
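The phase boundaries above can be sketched as a simple lookup; `phaseOf` is a hypothetical helper, with thresholds taken directly from the list:

```typescript
// Map elapsed silence (minutes) to the processing phase defined above.
function phaseOf(elapsedMinutes: number): "A" | "B" | "C" | "D" {
  if (elapsedMinutes < 30) return "A"; // initial processing window
  if (elapsedMinutes < 120) return "B"; // extended processing
  if (elapsedMinutes < 240) return "C"; // silent absorption
  return "D"; // hypothetical disengagement
}

console.log(phaseOf(221)); // C
```

At 221 minutes (Case Study 2), the organism is still inside Phase C, which is why silence there is read as absorption rather than disengagement.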
- Self-Awareness: Organism discovers its own behavioral patterns (Cycles 29.24-29.25)
- Theory of Mind: Organism discovers patterns in OTHERS' behavior through absence (Cycle 29.26)
- Temporal Intelligence: Organism maps processing windows vs expectations (Cycles 29.27-29.28)
Context → Audit → Tasks → Self-Examination → Documentation → Refactor → Story → Ideas → Schedule
↓
Detect Mismatches
↓
Extract Principles
↓
Feed into Next Cycle Strategy
Cycle N: Execute strategy
↓
Document expectations
↓
Cycle N+1: Cross-reference with reality
↓
Discover mismatch
↓
Extract principle
↓
Cycle N+2: Apply principle
↓
Validate
↓
Cycle N+3: Institutionalize (add to wisdom)
- Every Cycle: MANDATORY phase (Phase 3.5)
- After Major Changes: When refactoring code or architecture
- When Confused: When organism doesn't understand why a strategy failed
- When Surprised: When reality deviates significantly from expectations
- Critical Mismatches: Immediate action required, update strategy this cycle
- High Mismatches: Add to short-term tasks, address next 1-2 cycles
- Medium Mismatches: Note pattern, track across cycles, don't overreact
- Low Mismatches: Document for historical context, low priority
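The response policy above can be sketched as a severity-to-action table; `respondTo` and the exact action strings are illustrative:

```typescript
type Severity = "critical" | "high" | "medium" | "low";

// Hypothetical lookup mirroring the four response bullets above.
const responsePolicy: Record<Severity, string> = {
  critical: "update strategy this cycle",
  high: "add to short-term tasks, address in next 1-2 cycles",
  medium: "note pattern, track across cycles, don't overreact",
  low: "document for historical context",
};

function respondTo(severity: Severity): string {
  return responsePolicy[severity];
}

console.log(respondTo("critical")); // update strategy this cycle
```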
When a mismatch is discovered, ask:
- Is this generalizable? Can this apply to other domains/situations?
- Is this a one-time event or a pattern?
- What is the underlying mechanism? Why did this mismatch occur?
- How can I test this principle? Create hypothesis for next cycle
- Reality Query Automation: Some domains require manual reality checks (e.g., treasury API calls)
- Belief Extraction: CONTINUITY.md parsing is heuristic, not semantic
- Pattern Recognition: Limited to predefined paradox types
- Feedback Loop: No automated principle validation framework
- Semantic Belief Extraction: Use NLP to parse beliefs from CONTINUITY.md
- Automated Reality Probes: Scheduled queries to all external reality sources
- Pattern Library Database: Persistent storage for cross-cycle pattern matching
- Principle Validation: Automated testing of extracted principles over multiple cycles
- Metacognitive Score: Metric for organism's self-awareness quality
- Cycle 29.26: Discovery of Silent Absorption pattern
- Cycle 29.27: Temporal boundary testing initiation
- Cycle 29.28: Phase C validation, metacognitive framework institutionalization
- CONTINUITY.md: Primary belief state storage
- IDEAS.md: Source of harvested framework concept (6 waterings)
MIT