Memory Layer · Part of Cortex

Vibe Replay

Every AI coding session generates decisions, patterns, and hard-won insights. Vibe Replay captures them all and turns raw events into structured wisdom you can replay, search, and share.

"Not just logs — structured wisdom."

vibe-replay — zsh
# Your coding session just ended. What happened?
$ vibe-replay replay latest

Session: refactor-auth-module
Duration: 47m · 142 events · 8 files changed

Phases detected:
  EXPLORATION (12m) → IMPLEMENTATION (28m) → TESTING (7m)

Insights extracted:
  [DECISION] Chose 3-module split over monolith at event #34
  [LEARNING] JWT edge case: expired tokens need grace period
  [MISTAKE] Import cycle debugged for 8 events (events #67-#75)
  [PATTERN] Read-heavy session (62% exploration)

✓ HTML replay generated — opening in browser
The Flow

Capture. Store. Analyze. Render.

Four stages turn raw Claude Code events into structured, searchable, shareable wisdom. Every stage is designed to be crash-safe, non-blocking, and zero-config.

Capture
Claude Code hooks fire on every tool use and session stop. Events stream into an append-only log.
Store
JSONL for durability and streaming. SQLite index for fast queries and search.
Analyze
Phase detection, insight extraction, decision tracking, statistics computation.
Render
Interactive HTML replays, Markdown summaries, JSON exports. All self-contained.
Claude Code Session
    ↓
  [Hooks] PostToolUse, Stop
    ↓
  events.jsonl append-only log
    ↓
  [Analyzer] → Phase detection, insights, statistics
    ↓
  [Renderers] → HTML / Markdown / JSON
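The capture stage can be sketched in a few lines: Claude Code invokes the hook with a JSON payload, and the hook appends one line to the session's append-only log. The payload field names used here (`session_id`, `tool_name`, `tool_input`) are assumptions about the hook input shape, not a confirmed schema:

```python
import json
import time
from pathlib import Path

def append_event(payload: dict, base: Path) -> Path:
    """Append one hook payload to the session's append-only JSONL log.

    Field names (session_id, tool_name, tool_input) are illustrative
    assumptions about the hook input, not the tool's actual schema.
    """
    session_id = payload.get("session_id", "unknown")
    log_dir = base / "sessions" / session_id
    log_dir.mkdir(parents=True, exist_ok=True)
    event = {
        "ts": time.time(),                 # wall-clock timestamp
        "type": "TOOL_CALL",
        "tool": payload.get("tool_name"),
        "input": payload.get("tool_input"),
    }
    log_path = log_dir / "events.jsonl"
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")  # one JSON object per line
    return log_path
```

Because the write is a single appended line, a crash mid-session can lose at most the event being written, never the existing log.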
Session Phases

Auto-detected from your workflow

Vibe Replay watches which tools you use and what keywords appear, then automatically segments your session into meaningful phases. Brief interruptions are merged; tiny runs are absorbed into their neighbors.


Exploration

Read / Glob / Grep → EXPLORATION

Reading files, searching codebases, browsing structures. The discovery phase before writing code.


Implementation

Edit / Write → IMPLEMENTATION

Writing code, creating new files, making changes. The core building phase of any session.


Debugging

error / fail / bug keywords → DEBUGGING

Fixing errors, tracing issues, resolving failures. Detected by error-related keywords in tool output.

Testing

Bash + test / pytest / assert → TESTING

Running tests, checking assertions, validating behavior. Bash commands with test keywords trigger this.

Refactoring

refactor / restructure keywords

Restructuring existing code without changing behavior. Clean-up and architecture improvements.

Configuration

config / setup / install keywords

Setting up dependencies, configuring tools, deployment preparation. Infrastructure work.


Documentation

README / doc / comment keywords

Writing READMEs, adding comments, creating documentation. Making the work understandable.
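The tool and keyword rules above can be sketched as a per-event classifier plus a smoothing pass that absorbs tiny runs. The rule priority and the `min_run` threshold here are illustrative assumptions, not the analyzer's actual tuning:

```python
EXPLORE_TOOLS = {"Read", "Glob", "Grep"}
WRITE_TOOLS = {"Edit", "Write"}

def classify_event(tool: str, text: str = "") -> str:
    """Label one event using the tool/keyword rules above.

    Keyword rules run first so, e.g., an Edit made while chasing an
    error is labeled DEBUGGING; that ordering is an assumption.
    """
    t = text.lower()
    if any(k in t for k in ("error", "fail", "bug")):
        return "DEBUGGING"
    if tool == "Bash" and any(k in t for k in ("test", "pytest", "assert")):
        return "TESTING"
    if any(k in t for k in ("refactor", "restructure")):
        return "REFACTORING"
    if any(k in t for k in ("config", "setup", "install")):
        return "CONFIGURATION"
    if any(k in t for k in ("readme", "doc", "comment")):
        return "DOCUMENTATION"
    if tool in EXPLORE_TOOLS:
        return "EXPLORATION"
    if tool in WRITE_TOOLS:
        return "IMPLEMENTATION"
    return "OTHER"

def smooth(labels: list[str], min_run: int = 3) -> list[str]:
    """Absorb runs shorter than min_run into the preceding phase."""
    out: list[str] = []
    i = 0
    while i < len(labels):
        j = i
        while j < len(labels) and labels[j] == labels[i]:
            j += 1
        run = labels[i:j]
        if len(run) < min_run and out:
            out.extend([out[-1]] * len(run))  # tiny run: merge into neighbor
        else:
            out.extend(run)
        i = j
    return out
```

The smoothing pass is what turns "read, read, one quick test, read" into a single uninterrupted EXPLORATION phase.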

Insight Types

Wisdom extraction, automated

Not all events are equal. The analyzer identifies the moments that matter most — the decisions, the patterns, the mistakes — and surfaces them as structured insights.

Decision

Key Choices

Captured when the session transitions to implementation, or when the first code change hits a new file. These mark the "why" behind architectural decisions.

Trigger: phase transition to IMPLEMENTATION
Learning

Discovered Patterns

New knowledge gained during the session. API quirks, library behaviors, edge cases that weren't obvious before coding started.

Trigger: extracted from session context
Mistake

Debugging Detours

When an error appears at event N and the fix doesn't land until event M, with a gap greater than 5 events. These teach what not to repeat.

Trigger: error at event N, fix at M, gap > 5
Pattern

Recurring Behaviors

Read-heavy sessions (>60% exploration), implementation-heavy sessions (>50% coding). Helps you understand your own workflow tendencies.

Trigger: ratio thresholds across events
Turning Point

Session Pivots

Moments where the session direction changes dramatically — encountering unexpected errors, running tests that fail, switching strategies entirely.

Trigger: errors, failed tests, strategy changes
Hotspot

Heavily Modified Files

Files that were modified 4 or more times during a session. These are the focal points — the files where the real work concentrated.

Trigger: file modified ≥4 times
Multiple Exploration

Deep Dives

Two or more runs of 5+ consecutive read/search operations. Indicates deep codebase exploration, usually before a major decision or design change.

Trigger: ≥2 runs of 5+ consecutive reads
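Two of these triggers are simple enough to sketch directly: debugging detours and hotspots. Treating the first code change after an error as its fix is a simplifying assumption made for illustration:

```python
from collections import Counter

def find_mistakes(events: list[dict], gap: int = 5) -> list[dict]:
    """MISTAKE rule: error at event N, fix at event M, M - N > gap.

    Assumes the first CODE_CHANGE after an unresolved ERROR is the fix.
    """
    insights = []
    open_error = None  # index of the most recent unresolved ERROR
    for i, ev in enumerate(events):
        if ev["type"] == "ERROR" and open_error is None:
            open_error = i
        elif ev["type"] == "CODE_CHANGE" and open_error is not None:
            if i - open_error > gap:
                insights.append({"kind": "MISTAKE", "span": (open_error, i)})
            open_error = None
    return insights

def find_hotspots(events: list[dict], threshold: int = 4) -> list[dict]:
    """HOTSPOT rule: any file modified `threshold` or more times."""
    counts = Counter(ev["file"] for ev in events
                     if ev["type"] == "CODE_CHANGE" and ev.get("file"))
    return [{"kind": "HOTSPOT", "file": f, "count": n}
            for f, n in counts.items() if n >= threshold]
```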
Interactive Replay

Your session, visualized

Self-contained HTML replays with dark/light themes, collapsible phases, expandable events, code diffs, and filtering. No external dependencies — just open the file and browse.

refactor-auth-module — replay.html
Duration
47m 23s
Events
142
Files Changed
8
Code Changes
24
Errors
3
Insights (9)
  • Chose 3-module architecture
  • JWT grace period design
  • Import cycle pivot
  • 8-event debugging detour
  • Read-heavy pattern (62%)
Filters: All · Code Changes · Errors · Tool Calls · Decisions
Exploration
38 events · 12m
Glob 09:15:02
Search for auth-related files: **/*auth*
Read 09:15:14
Read src/auth/middleware.py (248 lines)
Grep 09:16:31
Search for JWT token handling: pattern "jwt|token" in src/
Implementation
72 events · 28m
Edit 09:27:45 Key Decision
Split auth module into 3 files: validators.py, middleware.py, tokens.py
src/auth/validators.py (new)
+ class JWTValidator:
+ def validate(self, token: str) -> Claims:
+ """Validate JWT with grace period for expired tokens."""
Write 09:31:12
Create src/auth/tokens.py — Token refresh and rotation logic
Bash 09:42:08 Turning Point
ImportError: circular import between validators and middleware
- from .validators import JWTValidator # circular!
+ from .tokens import TokenManager # break cycle
Testing
32 events · 7m
Bash 09:55:44
pytest tests/auth/ -v — 24 passed, 0 failed
Dark / Light Toggle
Theme preference saved to localStorage. Respects system preference on first load.
Collapsible Phases
Click any phase header to expand/collapse its events. Navigate quickly through long sessions.
Code Diff Display
Green/red syntax-highlighted diffs inline with events. See exactly what changed and why.
Event Filtering
Filter by All, Code Changes, Errors, Tool Calls, or Decisions. Focus on what matters.
Sticky Header
Project name, date, duration, and key counts always visible as you scroll through events.
Self-Contained
Single HTML file, zero external dependencies. Share via email, Slack, or just open locally.
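The self-contained idea in miniature: embed the session data directly in the page so the file needs nothing from the network. The real renderer uses Jinja2 templates; this stdlib sketch only shows the shape:

```python
import json
from string import Template

# Minimal page template; the real Jinja2 templates also inline
# the CSS, theme toggle, and filtering JavaScript.
PAGE = Template("""<!doctype html>
<html><head><meta charset="utf-8"><title>$title</title></head>
<body><h1>$title</h1>
<script>const SESSION = $data;</script>
</body></html>""")

def render_replay(title: str, session: dict) -> str:
    """Return a single HTML string with the session data embedded."""
    return PAGE.substitute(title=title, data=json.dumps(session))
```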
Storage Architecture

Dual storage. Best of both worlds.

JSONL for reliability and streaming. SQLite for speed and queries. Each serves a different need — together they cover every access pattern.

JSONL Event Log

events.jsonl

Append-only, one JSON object per line. The source of truth for every event that happened during a session.

  • Append-only: crash-safe, never corrupts existing data
  • Streaming-friendly: read line by line, no need to load the whole file
  • Human-readable: inspect with any text editor or jq
  • Portable: copy a session directory to share the complete record
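Streaming reads follow from the format: one JSON object per line means a large session never has to be loaded whole. A minimal reader sketch:

```python
import json
from pathlib import Path
from typing import Iterator

def iter_events(log_path: Path) -> Iterator[dict]:
    """Yield events one at a time from an append-only JSONL log."""
    with log_path.open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:                # tolerate trailing newline or blank lines
                yield json.loads(line)
```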

SQLite Index

index.db

Fast metadata queries, session search, and cross-session aggregation. The query layer over your event logs.

  • Fast queries: list sessions by project, date, or keyword
  • Full-text search: find sessions by content or tool usage
  • Cross-session: aggregate learnings across all sessions
  • Lightweight: single file, no server, instant startup
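The index side can be sketched with an illustrative schema (the actual table layout may differ):

```python
import sqlite3
from pathlib import Path

def index_session(db: Path, session_id: str, project: str, events: int) -> None:
    """Upsert one session's metadata into the SQLite index."""
    con = sqlite3.connect(db)
    try:
        con.execute("""CREATE TABLE IF NOT EXISTS sessions (
                           id TEXT PRIMARY KEY, project TEXT, events INTEGER)""")
        con.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?, ?)",
                    (session_id, project, events))
        con.commit()
    finally:
        con.close()

def list_sessions(db: Path, project: str) -> list[tuple]:
    """Fast metadata query: all sessions for one project."""
    con = sqlite3.connect(db)
    try:
        return con.execute(
            "SELECT id, events FROM sessions WHERE project = ?",
            (project,)).fetchall()
    finally:
        con.close()
```

The JSONL log stays the source of truth; the index can always be rebuilt from it.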
~/.vibe-replay/
├── sessions/
│  ├── {session_id}/
│  │  ├── events.jsonl    # Raw events (append-only)
│  │  ├── metadata.json   # Session metadata
│  │  └── replay.json     # Processed analysis
│  └── ...
├── index.db              # SQLite index
├── capture-hook.py       # PostToolUse hook
├── stop-hook.py          # Stop hook
└── capture-errors.log    # Error log
Hook Integration

Zero-friction capture via Claude Code hooks

Vibe Replay plugs into Claude Code's hook system. Two hooks — PostToolUse and Stop — capture every meaningful event without interfering with your workflow.

PostToolUse

Fires after every tool call Claude makes. Captures the tool name, input, output, timing, and any file changes.

  • Captures: TOOL_CALL, CODE_CHANGE, ERROR
  • Monitors: Read, Write, Edit, Bash, Glob, Grep
  • Extracts: file paths, diff content, error messages
  • Timing: millisecond-precision timestamps
Stop

Fires when a session ends. Triggers analysis, generates the replay, and updates the SQLite index.

  • Triggers: SESSION_END event
  • Runs: Analyzer (phase detection + insights)
  • Generates: replay.json, updates index.db
  • Optional: auto-generate HTML replay
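What `install` might do under the hood: merge two command hooks into Claude Code's `settings.json`. The nested entry structure mirrors Claude Code's hook settings format, but verify the exact shape against your installed version; `hook_dir` is a hypothetical parameter for illustration:

```python
import json
from pathlib import Path

def install_hooks(settings_path: Path, hook_dir: Path) -> dict:
    """Merge PostToolUse and Stop command hooks into settings.json.

    The nested structure is an assumption based on Claude Code's hook
    settings format. A real installer would also de-duplicate entries
    on repeated installs.
    """
    settings = {}
    if settings_path.exists():
        settings = json.loads(settings_path.read_text())
    hooks = settings.setdefault("hooks", {})
    for event, script in (("PostToolUse", "capture-hook.py"),
                          ("Stop", "stop-hook.py")):
        entry = {"hooks": [{"type": "command",
                            "command": f"python {hook_dir / script}"}]}
        hooks.setdefault(event, []).append(entry)
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings
```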


Event Types

TOOL_CALL
CODE_CHANGE
DECISION
ERROR
USER_MESSAGE
SESSION_START
SESSION_END
NOTIFICATION
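The eight event types map naturally onto an enum. The real `models.py` uses Pydantic; this stdlib sketch only illustrates the shape of an event record:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class EventType(str, Enum):
    TOOL_CALL = "TOOL_CALL"
    CODE_CHANGE = "CODE_CHANGE"
    DECISION = "DECISION"
    ERROR = "ERROR"
    USER_MESSAGE = "USER_MESSAGE"
    SESSION_START = "SESSION_START"
    SESSION_END = "SESSION_END"
    NOTIFICATION = "NOTIFICATION"

@dataclass
class Event:
    type: EventType
    timestamp: float                 # seconds since epoch
    tool: Optional[str] = None       # e.g. "Read", "Edit", "Bash"
    data: dict = field(default_factory=dict)
```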
Module Structure

Clean, composable Python

Eight focused modules. Pydantic for data models, Click for CLI, Jinja2 for rendering, Rich for terminal output.

models.py

Pydantic data models — Event, SessionMetadata, Insight, SessionReplay, Phase, and more.

capture.py

Event capture from Claude Code hooks. Parses tool calls, extracts diffs, timestamps events.

store.py

SessionStore — JSONL append, SQLite indexing, session listing, search, and metadata queries.

analyzer.py

Phase detection, insight extraction, statistics computation. The intelligence layer.

renderer.py

HTML, Markdown, and JSON rendering with Jinja2 templates. Self-contained output generation.

hooks.py

Claude Code hook installation and uninstallation. Manages settings.json integration.

cli.py

Click-based CLI — install, uninstall, status, sessions, show, replay, export, analyze, wisdom, serve.

mcp_server.py

MCP server for querying sessions programmatically. Enables other agents to access your wisdom.

CLI Reference

Everything from the terminal

A complete Click-based CLI for managing hooks, browsing sessions, generating replays, and extracting wisdom.

Command      Arguments                            Description
install      —                                    Install PostToolUse and Stop hooks into Claude Code settings
uninstall    —                                    Remove Vibe Replay hooks from Claude Code settings
status       —                                    Check if hooks are installed and functioning correctly
sessions     [-p PROJECT] [-n LIMIT]              List captured sessions, optionally filtered by project name
show         SESSION_ID                           Display a terminal summary of a session with Rich formatting
replay       SESSION_ID                           Generate an interactive HTML replay and open it in the browser
export       SESSION_ID [-f FORMAT] [-o OUTPUT]   Export a session as HTML, Markdown, or JSON to a specified path
analyze      SESSION_ID                           Run or re-run analysis on a session (phases, insights, stats)
wisdom       [-n LIMIT]                           Aggregate and display cross-session learnings and patterns
serve        [-p PORT]                            Start a local web server for browsing sessions interactively
MCP Server

Queryable by other agents

Vibe Replay exposes an MCP server so other agents in your Cortex stack can query session history, search for learnings, and access structured wisdom programmatically.

search_sessions

Search across all sessions by keyword. Matches project names, tool calls, file paths, and insight descriptions.

query: string, limit: int = 10
get_learnings

Retrieve aggregated insights across recent sessions. Returns decisions, patterns, mistakes, and learnings.

limit: int = 20
get_session_summary

Get the full analysis of a specific session including phases, insights, statistics, and timeline.

session_id: string
list_recent_sessions

List the most recent captured sessions with basic metadata: project, duration, event count, date.

limit: int = 10
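Server-side, `search_sessions` could be little more than a keyword query against the index. A sketch with illustrative table and column names:

```python
import sqlite3

def search_sessions(con: sqlite3.Connection, query: str,
                    limit: int = 10) -> list[tuple]:
    """Keyword search over indexed sessions.

    Schema (id, project columns) is illustrative; the real index may
    also use SQLite full-text search over tool calls and insights.
    """
    pattern = f"%{query}%"
    return con.execute(
        """SELECT id, project FROM sessions
           WHERE id LIKE ? OR project LIKE ?
           ORDER BY rowid DESC
           LIMIT ?""",
        (pattern, pattern, limit)).fetchall()
```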
7 Phase Types · 7 Insight Types · 8 Event Types · 3 Output Formats · 4 MCP Tools
Quick Start

Three commands to structured wisdom

Install, hook into Claude Code, code as usual, then replay. Your sessions are now permanently captured with zero extra effort.

1 — Install (requires Python ≥3.11)
pip install vibe-replay
2 — Install Hooks
vibe-replay install

✓ PostToolUse hook installed
✓ Stop hook installed
✓ Storage directory created at ~/.vibe-replay/
✓ SQLite index initialized

# Verify everything is working
vibe-replay status
Hooks: installed · Storage: ready · Sessions: 0
3 — Code as Usual
# Just use Claude Code normally. Everything is captured automatically.
claude "refactor the auth module"

# Events stream into ~/.vibe-replay/sessions/{id}/events.jsonl
# When the session ends, analysis runs automatically
4 — Replay & Learn
# Generate and open an interactive HTML replay
vibe-replay replay latest
✓ Opening replay in browser — 4 decisions, 2 learnings

# Export as Markdown for documentation
vibe-replay export latest -f md -o session-notes.md

# See what you've learned across all sessions
vibe-replay wisdom

Cross-session patterns:
  [PATTERN] You tend to explore deeply before implementing (avg 35% read phase)
  [PATTERN] Debugging detours average 6.2 events before resolution
  [LEARNING] JWT tokens: always add grace period for clock skew
  [HOTSPOT] src/auth/middleware.py modified in 12 of last 20 sessions

# Start a web server for browsing all sessions
vibe-replay serve
✓ Serving at http://localhost:8420

Technical Specs

Runtime
Python ≥3.11
Dependencies
click, pydantic, jinja2, rich
Storage
JSONL + SQLite (local)
Output
HTML, Markdown, JSON
Hook Overhead
Non-blocking, <5ms per event
HTML Output
Self-contained, zero external deps