Memory System
3-level progressive retrieval that persists knowledge across sessions without touching user source files. Insights are captured automatically and promoted based on confidence.
Knowledge Dies with the Session
Every new AI coding session starts from zero. The patterns your team discovered, the gotchas that cost hours to debug, the architectural decisions that shaped the project. All gone. You re-explain the same context over and over, and the AI keeps making the same mistakes.
Three Memory Levels
Durable Memory
Long-term patterns, architectural decisions, and project conventions confirmed across multiple sessions. Scoped per-project, persists across sessions within the same installation.
Daily Memory
Session-day insights, discovered gotchas, and temporary decisions. Auto-captured from handoffs and session events during the working day.
Session Memory
Current session state tracked in session.yaml. Includes agent progress, decisions made, deviations, and mode transitions. Volatile, lost when the session ends.
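As a rough illustration, a session.yaml for the session tier might look like the following; the field names here are hypothetical, not the actual schema:

```yaml
# Illustrative sketch of session.yaml -- keys are assumed, not the real schema.
session_id: 2025-01-10-a3f2
mode: build                    # current mode; cleared when the session ends
agents:
  - name: implementer
    progress: 0.6              # agent progress through its task
decisions:
  - "Store all memory tiers as YAML"
deviations: []                 # departures from the plan, if any
```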
Auto-Capture Flow
Per-turn memory extraction captures decisions, corrections, and validated patterns automatically (max 3 memories per turn). The session-digest.js hook preserves critical context before compaction.
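The per-turn cap can be sketched as a filter plus a hard limit; the event kinds and function names are illustrative, not the actual hook API:

```javascript
// Sketch of per-turn memory extraction with the documented cap of 3.
// Event shape and kind names are assumed for illustration.
const MAX_MEMORIES_PER_TURN = 3;

function extractMemories(turnEvents) {
  // Keep only events worth remembering: decisions, corrections, validated patterns.
  const candidates = turnEvents.filter(e =>
    ['decision', 'correction', 'validated_pattern'].includes(e.kind)
  );
  // Enforce the per-turn cap so one busy turn cannot flood the store.
  return candidates.slice(0, MAX_MEMORIES_PER_TURN);
}
```

The cap keeps extraction cheap and predictable regardless of how eventful a single turn is.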
Progressive Retrieval
The Context Bracket determines how much memory is loaded. As the context window degrades, the system progressively loads more memory detail to compensate.
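One way to picture progressive retrieval is a mapping from remaining context budget to a detail level; the bracket boundaries and level names below are assumptions for illustration, not the system's actual values:

```javascript
// Sketch: map remaining context budget (0..1) to a memory detail level.
// Bracket boundaries are illustrative, not the real Context Bracket values.
function memoryDetailFor(contextRemaining) {
  if (contextRemaining > 0.5) return 'summary';   // healthy context: summaries only
  if (contextRemaining > 0.2) return 'expanded';  // degrading: add full daily entries
  return 'full';                                  // near exhaustion: load durable + daily detail
}
```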
Gotcha Auto-Capture
The system auto-captures gotchas: unexpected patterns, edge cases, and non-obvious insights discovered during agent execution. Stored without modifying any user source files.
Patterns
Recurring code patterns and architectural conventions discovered across sessions.
Decisions
Key architectural and technical decisions with rationale for future reference.
Errors
Error patterns, debugging insights, and workarounds captured during development.
Conventions
Project conventions, naming standards, and style preferences observed in the codebase.
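An entry in any of the four categories above could be recorded roughly like this; the keys are hypothetical, chosen to match the YAML storage format described below:

```yaml
# Hypothetical memory entry -- keys are illustrative, not the real schema.
id: mem-0142
category: conventions        # patterns | decisions | errors | conventions
summary: "API handlers live in src/handlers/, one file per route"
confidence: 0.92
confirmed_sessions: 4
last_confirmed: 2025-01-10
```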
Storage Mechanism
Each tier uses a storage strategy optimized for its lifecycle and access patterns.
| Tier | Location | Lifecycle | Format |
|---|---|---|---|
| Durable Memory | .chati/memories/shared/durable/ | Permanent within the project | YAML + Markdown |
| Daily Memory | .chati/memories/shared/daily/ | Auto-archived after 30 days | YAML |
| Session Memory | .chati/session.yaml | Volatile, cleared on session end | YAML (in-memory) |
Confidence & Promotion
Memory entries start at low confidence and are promoted as they are confirmed across sessions. Once a pattern has been confirmed in multiple sessions and its confidence exceeds 0.9, it is promoted from session/daily memory to durable memory, where it persists across all future sessions within the project.
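The promotion rule can be sketched as a simple predicate; the 0.9 threshold comes from the text, while the minimum session count of 2 is an assumption standing in for "confirmed across multiple sessions":

```javascript
// Sketch of the promotion gate to durable memory.
// DURABLE_CONFIDENCE (0.9) is from the doc; the session minimum is assumed.
const DURABLE_CONFIDENCE = 0.9;
const MIN_CONFIRMED_SESSIONS = 2;

function shouldPromote(entry) {
  return entry.confidence > DURABLE_CONFIDENCE &&
         entry.confirmedSessions >= MIN_CONFIRMED_SESSIONS;
}
```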
Memory Consolidation
The orchestrator automatically triggers a 4-phase consolidation cycle when memory entries exceed a threshold. The cycle merges, prunes, and archives memory entries, reducing noise while preserving high-value insights.
Orient
Scan all memory tiers. Map entry counts, duplicates, and staleness scores.
Gather
Collect related entries across tiers. Group by topic, agent, and cognitive sector.
Consolidate
Merge duplicates, strengthen confirmed patterns, resolve contradictions.
Prune
Archive low-confidence entries to .chati/memories/shared/archive/. Remove stale data.
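The four phases above can be sketched as one pass over the entry set; the grouping key, merge heuristic, and prune threshold are illustrative simplifications of the real cycle:

```javascript
// Sketch of the 4-phase consolidation cycle. Heuristics are placeholders:
// real consolidation also resolves contradictions and scores staleness.
function consolidate(entries, { pruneThreshold = 0.3 } = {}) {
  // Orient + Gather: group related entries by topic.
  const byTopic = new Map();
  for (const e of entries) {
    if (!byTopic.has(e.topic)) byTopic.set(e.topic, []);
    byTopic.get(e.topic).push(e);
  }
  // Consolidate: merge duplicates per topic, keeping the highest-confidence entry.
  const merged = [...byTopic.values()].map(group =>
    group.reduce((a, b) => (a.confidence >= b.confidence ? a : b))
  );
  // Prune: split low-confidence entries out for archiving.
  const kept = merged.filter(e => e.confidence >= pruneThreshold);
  const archived = merged.filter(e => e.confidence < pruneThreshold);
  return { kept, archived };
}
```

Archived entries would then be written under .chati/memories/shared/archive/ rather than deleted outright.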
Daily Digest
The orchestrator automatically generates cumulative daily activity logs (KAIROS Lite) at session end. Stored in .chati/memories/shared/daily/YYYY-MM-DD.md, the digest tracks agent scores, key decisions, gotchas discovered, and session duration. It is append-only per calendar day.
Attention Scoring
Individual memory entries are scored for retrieval priority. Score determines whether a memory is pre-loaded, loaded on demand, or only accessible via explicit search.
HOT (> 0.7)
Pre-loaded automatically into every agent context. High confidence, frequently accessed, recently confirmed patterns.
WARM (0.3 - 0.7)
Loaded on demand when relevant keywords or topics match. Moderate confidence, occasionally referenced.
COLD (< 0.3)
Only accessible via explicit memory search. Low confidence, rarely accessed, or aging entries pending consolidation.
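The three tiers reduce to a small classifier over the attention score; the thresholds below are the ones stated above (HOT above 0.7, WARM from 0.3 to 0.7, COLD below 0.3):

```javascript
// Classify a memory entry's attention score into a retrieval tier.
// Thresholds are taken directly from the tier definitions above.
function attentionTier(score) {
  if (score > 0.7) return 'HOT';    // pre-loaded into every agent context
  if (score >= 0.3) return 'WARM';  // loaded on demand via keyword/topic match
  return 'COLD';                    // reachable only through explicit memory search
}
```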