Code Companion

A local code intelligence system that reduces AI coding assistant token usage through compression, indexing, and persistent memory. Works with Claude Code, Google Antigravity, Gemini CLI, VS Code Copilot, Cursor, and OpenAI Codex via MCP. Available as an MCP server, CLI, and web dashboard.
01
Context Compression
AST-based code summarization with structure, outline, smart, and diff modes
02
Smart Local Index
TF-IDF retrieval to pre-filter what gets sent to Claude
03
Session Tracking
Persistent decisions, file changes, and tool call logging across sessions
04
Compression Protocol
Shorthand encoding/decoding for prompts and responses
05
Tiered Memory
Durable fact storage with cross-session semantic search
06
File Watcher
Monitors project changes and triggers automatic index rebuilds
07
Context Snapshots
Capture/restore working context across session boundaries to reset tokens without losing state
08
Transcript Search
TF-IDF index over past conversations for cross-session semantic retrieval (Claude Code; other IDEs use c3_memory(action='recall')/c3_memory(action='query'))
09
Background Agents
Autonomous daemon threads that monitor index staleness, memory health, instructions file drift, and context budget
10
Output Filter
Two-pass terminal output filtering: deterministic noise removal + optional LLM summarization via Ollama
11
Adaptive Router
Classifies queries and routes to appropriate local LLMs (gemma3, deepseek-r1, llama3.2) or passes through to Claude
12
SLTM Vector Memory
Semantic Long-Term Memory with optional ChromaDB vector search, hybrid TF-IDF + cosine similarity scoring
13
File Memory
Persistent structural index of source files with line ranges, enabling targeted reads. Background agent maintains maps; Read hook enforces C3 tool usage

Recent Updates

v2.3.3 — March 6, 2026
  • Persistent Compression Cache: Compression and structural mapping results are now globally cached in .c3/cache based on MD5 content hashes and mode, making subsequent reads instantly fast and avoiding redundant AST parsing.
  • Expanded Language Mapping: Rich AST-based structural support added for Go, Rust, JSON, and YAML files, in addition to the existing Python and JS/TS support.
  • Documentation Mapping: Added structural mapping for HTML, Markdown, and CSS files, enabling targeted reads of specific UI components, document headers, and styles.
  • Performance History: c3 benchmark reports now feature a dedicated History Tab with time-series charts (via Chart.js) tracking token savings, quality, and latency across versions.
  • Global Configuration Migration: c3 init . --force now correctly migrates legacy analytics locations and refreshes instructions files across all supported IDEs.
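
The content-hash cache scheme described above can be sketched in a few lines. This is an illustrative helper, not C3's actual code; the real layout of `.c3/cache` may differ.

```python
import hashlib

def cache_key(source: str, mode: str) -> str:
    """Derive a cache key from file content and compression mode.

    Results are keyed by the MD5 hash of the file's content plus the
    compression mode: re-reading unchanged content in the same mode
    hits the cache, while any edit or mode change produces a new key.
    (Sketch only; C3's actual cache layout may differ.)
    """
    digest = hashlib.md5(source.encode("utf-8")).hexdigest()
    return f"{digest}-{mode}"

k_map = cache_key("def f(): pass", "map")
k_smart = cache_key("def f(): pass", "smart")   # same content, new mode -> new key
k_edited = cache_key("def f(): ...", "map")     # changed content -> new key
```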
v2.3.0 — March 5, 2026
  • Tree-sitter Integration: Migrated from line-based regex to precise AST parsing for core languages, vastly improving map and search precision.
  • Hybrid Vector Search: Search recall now combines traditional keyword matching with semantic vector distance for higher retrieval grounding.
  • Background Agents: Autonomous daemon threads now monitor index staleness and context budget in real-time.
Latest Local Benchmark Snapshot

Optimized run executed on March 6, 2026 (v2.3.3) in this repository.

Metric                             Result
Overall Token Savings (43 files)   94.7%
Prompt Budget Multiplier           18.74x
Grounding Performance (with C3)    100.0% hit rate
Avg Local Latency                  2.05 ms/task

Installation

cd code-context-control
pip install .

# Initialize in your project
c3 init /path/to/your/project

Platform installers handle dependencies and PATH setup automatically.

Quick Start

The recommended way to use C3 is as an MCP server — your IDE calls C3 tools directly with no manual piping.

Install C3
cd code-context-control && pip install .
Initialize C3 in your project
python cli/c3.py init /path/to/your/project
Creates the .c3/ directory, builds the code index, and then walks through a guided 3-step setup: choose the IDE profile, optionally run a local git init, and optionally install MCP. When MCP is installed, C3 writes the IDE config plus project-local session files such as .codex/config.toml and .gemini/settings.json. For VS Code, it also generates .github/copilot-instructions.md (hard enforcement language) and .vscode/settings.json (Copilot instruction links); for Codex, it generates AGENTS.md with the C3 session protocol. Use --ide vscode, --ide cursor, or --ide codex to override IDE detection.
Restart your IDE
Open your IDE in the project. For Claude Code, run /mcp to verify C3 tools appear. For Google Antigravity and Gemini CLI, the MCP tools will be loaded based on your config. For VS Code, check MCP tools in the Copilot agent panel. For Codex, the .codex/config.toml is picked up automatically on next session start.

Cross-IDE Setup

C3 works with any MCP-capable IDE. The install-mcp command auto-detects your IDE and generates the correct config format.

Supported IDEs

IDE                 Config File                            Instructions File                Extra Files                  Hooks  Transcripts
Claude Code         .mcp.json                              CLAUDE.md                        .claude/settings.local.json  Yes    Yes
VS Code Copilot     .vscode/mcp.json                       .github/copilot-instructions.md  .vscode/settings.json        No     No
Cursor              .cursor/mcp.json                       .cursorrules                     —                            No     No
OpenAI Codex        .codex/config.toml                     AGENTS.md                        —                            No     No
Gemini CLI          .gemini/settings.json                  GEMINI.md                        —                            No     No
Google Antigravity  ~/.gemini/antigravity/mcp_config.json  global GEMINI.md                 —                            No     No

IDE-specific Installation

# Guided setup (recommended)
python cli/c3.py init /path/to/project

# Non-interactive setup with local Git + direct MCP
python cli/c3.py init /path/to/project --force --git --ide codex --mcp-mode direct

# MCP-only setup
python cli/c3.py install-mcp /path/to/project

# Explicit direct/proxy mode
python cli/c3.py install-mcp /path/to/project --mcp-mode direct
python cli/c3.py install-mcp /path/to/project --mcp-mode proxy

# Explicit IDE selection
python cli/c3.py install-mcp /path/to/project --ide vscode
python cli/c3.py install-mcp /path/to/project --ide cursor
python cli/c3.py install-mcp /path/to/project --ide claude
python cli/c3.py install-mcp /path/to/project --ide codex
python cli/c3.py install-mcp /path/to/project --ide gemini
python cli/c3.py install-mcp /path/to/project --ide antigravity

# IDE shorthand when already inside the project directory
python cli/c3.py install-mcp claude
python cli/c3.py install-mcp codex
python cli/c3.py install-mcp . gemini

--git runs a local-only git init. It does not create remotes or connect to GitHub, GitLab, or any other hosted service.

VS Code Copilot — install-mcp --ide vscode (and c3 init --ide vscode) generate two additional enforcement files: .github/copilot-instructions.md (hard enforcement language) and .vscode/settings.json (Copilot instruction links).

OpenAI Codex — install-mcp --ide codex (and c3 init --ide codex) write a TOML config (.codex/config.toml) and an AGENTS.md instructions file that enforces C3 usage in Codex sessions.

Gemini CLI — install-mcp --ide gemini (and c3 init --ide gemini) write a project-scoped JSON config (.gemini/settings.json) and a GEMINI.md instructions file.

Google Antigravity — install-mcp --ide antigravity writes the MCP config to the user-global Antigravity config (~/.gemini/antigravity/mcp_config.json) and a project-local GEMINI.md.

Graceful Degradation

All core C3 tools (search, compress, file_map, memory, sessions) work identically in every IDE. Claude-specific features degrade gracefully: PostToolUse hooks and transcript sync are simply skipped outside Claude Code, and those workflows fall back to the instructions file and manual tool calls such as c3_edits(action='log').

MCP Tools Reference

C3 exposes 10 MCP tools (listed in the tables below). All core tools work without Ollama; c3_delegate requires it.

Discovery & Compression

Tool Description
c3_search Consolidated search for code or transcripts. Actions: code TF-IDF search across indexed codebase. exact Exact or regex match across tracked files. files Ranked file discovery with structural metadata. transcript Search past conversations (Claude Code). Params: query, action, top_k, max_tokens.
c3_compress Compresses a source file to a token-efficient summary. Saves 40-70% tokens. Modes: map Structural map (classes/functions) with line numbers. dense_map Detailed structural map. smart Intelligent default (auto-selects best mode). diff Changes only (git diff context). Params: file_path, mode.
c3_read Surgically read specific sections of a file. Resolves symbol names to exact line ranges. Supports multi-file reads (comma-separated paths), multiple symbols (partial/substring match), and manual line ranges (single int, range, or list of ranges). Params: file_path, symbols, lines, include_docstrings.
c3_filter Filter terminal output or extract from files. Two modes: text mode: pass text for terminal output filtering (strips noise, collapses pass/fail). file mode: pass file_path to extract from logs/data. Use pattern for regex grep. Depth levels: fast regex only, smart regex + heuristics (default), deep regex + heuristics + LLM (requires Ollama). Params: file_path, text, pattern, max_lines, depth.
c3_validate Syntax-check a file using native language parsers — no AI, no external services. Supports: py→ast, json→json.loads, yaml, xml/svg→ElementTree, toml, js/jsx→node, ts→tsc, tsx→tsc, java→javac, go→gofmt, rs→rustc, r→Rscript, sh/bash→bash -n, html→lxml, css→tinycss2. Params: file_path.
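
A minimal sketch of the c3_validate dispatch style, covering only the two stdlib-backed cases (py → ast, json → json.loads); the real tool shells out to node, tsc, javac, and the other external parsers listed above.

```python
import ast
import json

def validate(path: str, text: str) -> dict:
    """Syntax-check text by file extension using native parsers (no AI).

    Illustrative sketch of the dispatch described above, limited to
    the parsers available in the Python standard library.
    """
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    try:
        if ext == "py":
            ast.parse(text)
        elif ext == "json":
            json.loads(text)
        else:
            return {"ok": None, "error": f"no validator for .{ext}"}
        return {"ok": True, "error": None}
    except (SyntaxError, json.JSONDecodeError) as exc:
        return {"ok": False, "error": str(exc)}
```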

Session & Memory

Tool Description
c3_session Consolidated session management. Actions: start Begin new session. save Persist current session. log Record decision or file change (event_type: decision|file_change|auto). plan Store/update a named plan. snapshot Capture work state before /clear. restore Reinstate context after /clear. compact Snapshot + reset budget. convo_log Zero-token turn logger.
c3_memory Consolidated memory management (facts and Semantic LT Memory). Actions: add Store a fact with category. recall Search stored facts + semantic memory. query Deep cross-session query (uses vector search with TF-IDF fallback). Params: action, query, fact, category, top_k.

Status & Delegation

Tool Description
c3_status Consolidated status and observability. Views: budget Context tokens vs threshold, per-tool breakdown. Use detailed: true for full token accounting. health System diagnostics (Ollama, index, notifications, session, SLTM, memory). notifications List/acknowledge background agent notifications.
c3_delegate Delegate heavy tasks to local Ollama LLMs. Requires Ollama. Task types: available (zero-cost status check), auto (infer from content), summarize, explain, docstring, review, ask, test, diagnose, improve. Supports multi-file paths. Returns graceful error with suggestion if Ollama unavailable. Params: task, task_type, context, file_path.
c3_edits AI-tracked edit versioning and audit trail. log Record an edit with file, change_type, summary. history Query edit history (optional file filter, limit, since). versions All version entries for a specific file. stats Summary: total edits, files, by change_type, most-edited. Auto-logged by PostToolUse hooks on Edit/Write tools. Configurable tracking levels: minimal, standard, detailed. Standalone UI at /edits. Params: action, file, change_type, summary, limit, since, tag.
Snapshot-and-restore workflow
When your context window fills up: (1) call c3_session(action='snapshot') with a task description, (2) reset your context (e.g. /clear in Claude Code, or start a new chat in other IDEs), (3) call c3_session(action='restore') to reinstate your working context. Repeat this every 2-3 milestones to avoid the compounding token cost of accumulated session history. When token usage exceeds the budget threshold, C3 nudges the AI to snapshot and restart.
Instructions file management
CLAUDE.md management is available via CLI (c3 claudemd generate|save|check), TUI, and REST API. It is no longer exposed as an MCP tool. The ClaudeMdUpdater background agent handles automatic maintenance.
Edit Ledger — automatic edit tracking
Every Edit/Write tool call is auto-logged to .c3/edit_ledger.jsonl via PostToolUse hooks. Works on Claude Code and Gemini CLI (IDEs with hook support). Other IDEs can use c3_edits(action='log') manually. Configure tracking level in Settings → Edit Ledger: minimal (file + type only), standard (+ git info & diffs), or detailed (+ code snippets). View the full timeline at /edits.
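
An append-only JSONL ledger like the one described above can be sketched as follows. The field names here are illustrative, not necessarily the exact schema C3 writes to .c3/edit_ledger.jsonl.

```python
import json
import time
from pathlib import Path

def log_edit(ledger: Path, file: str, change_type: str, summary: str) -> dict:
    """Append one edit record to an append-only JSONL ledger.

    Sketch of the edit-ledger mechanism described above; field names
    are illustrative.
    """
    record = {
        "ts": time.time(),
        "file": file,
        "change_type": change_type,
        "summary": summary,
    }
    with ledger.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

def edit_history(ledger: Path, file=None) -> list:
    """Read the ledger back, optionally filtering by file."""
    rows = [json.loads(line) for line in ledger.read_text().splitlines() if line]
    return [r for r in rows if file is None or r["file"] == file]
```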

Background Agents

C3 runs 9 autonomous daemon threads that perform periodic analysis and surface findings via a notification queue. Notifications are automatically prepended to the next MCP tool response so Claude sees them naturally. Each agent supports optional AI enhancement via local Ollama models — when AI is unavailable or disabled, agents fall back to heuristic logic.

Agent Interval Description
IndexStaleness 60s Monitors file changes. Warns at 5 pending changes, auto-rebuilds index at 15. AI: summarizes affected areas after rebuild
MemoryPruner 300s Finds duplicate facts and flags unused facts when store exceeds 10 entries. AI: embedding cosine similarity + merge suggestions Fallback: Jaccard similarity ≥ 0.8
ClaudeMdDrift 120s Checks the instructions file for staleness when files have changed. Deduplicates by hashing issues. AI: actionable update summary
SessionInsight 600s Analyzes session activity to surface coaching tips (repeated queries, missing remembers, low compression usage, heavy search without extract). Only re-analyzes after 5+ new tool calls. AI: contextual coaching tip from session summary
AutonomyPlanner 240s Builds a prioritized autonomous next-step plan from recent tool telemetry (context pressure, repeated read/search loops, terminal failures, and stale-index signals). Uses cooldown + score gating to avoid noise. AI: rewrites plan into concise prioritized actions Fallback: deterministic signal scoring
ClaudeMdUpdater 900s Auto-maintains the instructions file using memory, sessions, and staleness checks. Promotes high-relevance facts, refreshes stale sections, compacts when over the IDE's line limit. Supports auto_apply (default on) or dry-run mode. AI: generates targeted update plan
FileMemory 120s Maintains persistent structural maps of source files. Processes queued files from the Read hook, re-extracts section maps (classes, functions, line ranges) when files change. AI: generates 1-2 sentence file summary Fallback: structural map only, no summary
DelegateCoach 180s Monitors session activity for missed delegation opportunities (large file reads, unhandled tracebacks, heavy compression). Emits actionable notifications with exact c3_delegate commands to use. AI: suggests targeted delegation tips Fallback: heuristic rule matching
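
The MemoryPruner's non-AI fallback (token-set Jaccard similarity ≥ 0.8) can be sketched as:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two facts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def find_duplicates(facts: list, threshold: float = 0.8) -> list:
    """Flag fact pairs whose similarity meets the pruning threshold.

    Mirrors the heuristic fallback described above; the AI path would
    use embedding cosine similarity instead.
    """
    pairs = []
    for i in range(len(facts)):
        for j in range(i + 1, len(facts)):
            if jaccard(facts[i], facts[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```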
Tool Description
c3_status(view='notifications') List and acknowledge agent notifications. Pass data='ack_all' to acknowledge all pending.
Agent configuration
Agents can be enabled/disabled, AI-enhanced, and their intervals adjusted via .c3/config.json or the Agents tab in the dashboard:
{"agents": {"IndexStaleness": {"enabled": true, "interval": 90, "use_ai": true}, "MemoryPruner": {"enabled": false}}}
Each agent accepts enabled, use_ai, interval, and ai_model. AI features require a running Ollama instance.
Notification severity levels
critical and warning notifications auto-surface (prepended to the next tool response, max 3, auto-acknowledged). info notifications are only shown via c3_status(view='notifications').
AI badge & quick actions
Notifications include an ai_enhanced boolean field. When true, the dashboard and bottom drawer display a purple AI badge next to the severity indicator, showing at a glance whether the notification was generated by AI or heuristic logic.

Each agent's notifications also include contextual quick action buttons in the bottom drawer:
Agent Actions
IndexStaleness Rebuild Index — triggers POST /api/index/rebuild
MemoryPruner View Facts — navigates to Memory tab
ClaudeMdDrift Check Staleness — fetches GET /api/claudemd/check | Compact — triggers POST /api/claudemd/compact
SessionInsight View Session — navigates to Sessions tab
AutonomyPlanner View Activity — navigates to Activity Log tab
ClaudeMdUpdater Check Staleness — fetches GET /api/claudemd/check | View Instructions — navigates to Settings tab
FileMemory View Files — navigates to Smart Index tab
DelegateCoach View Activity — navigates to Activity Log tab
API-type actions auto-acknowledge the notification after success.

Hybrid Intelligence

v2.3 adds three tiers of local intelligence: output filtering, query routing, and semantic long-term memory. Each tier can be enabled/disabled independently via feature flags.

Tool Description
c3_route Classify and route query to local LLM. Returns routing decision with class and target model. Params: query, context, force_class. Classes: log_summary, simple_qa, complex, passthrough. If a preferred model is missing or times out, router attempts configured fallbacks before returning Claude fallback.
c3_summarize Summarize text via appropriate local model. Params: text, style. Styles: concise, detailed, bullet.
c3_filter(text=...) Filter terminal output. Pass 1: deterministic noise removal. Pass 2: optional LLM summary. Params: text, use_llm.
c3_memory(action='add') Store record in SLTM. Params: text, category, metadata_json. Categories: design_docs, api_contracts, bug_history, terminal_summaries, code_notes, general.
c3_memory(action='query') Hybrid TF-IDF + vector search across SLTM collections. Params: query, category, top_k.
c3_status(view='memory') SLTM collection sizes and backend status (ChromaDB, Ollama).
c3_raw Show last unfiltered terminal output (before C3 filtering).
c3_why_context Show injected memories, filter/router decisions, and metrics.
c3_delegate Offload a task to a local Ollama LLM to save Claude API tokens. Params: task, task_type (summarize, explain, docstring, review, ask, test, diagnose, improve, auto), context, file_path, threshold_tokens, force_delegate. Auto-compresses file context and auto-searches index for ask tasks. Supports task-type inference (auto) and optional threshold gating. Returns [delegate:type:model|latency|confidence] header + response. If the required Ollama model is not pulled locally, returns a specific error with the ollama pull <model> command to fix it.
Configuration

Configured via the "hybrid" key in .c3/config.json:

Key             Default           Description
sltm_alpha      0.5               TF-IDF weight in hybrid search (1 - alpha = vector score weight)
sltm_min_score  0.3               Minimum similarity threshold for VectorStore search; results below this are discarded
embed_model     nomic-embed-text  Ollama model used for vector embeddings
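
With these defaults, the blend of the two signals looks roughly like this. A sketch of the weighting only; C3's real scorer may normalize the raw scores differently.

```python
def hybrid_score(tfidf: float, vector: float, alpha: float = 0.5) -> float:
    """Blend keyword and vector similarity: sltm_alpha weights the
    TF-IDF score, (1 - alpha) weights the vector score."""
    return alpha * tfidf + (1 - alpha) * vector

def sltm_search(candidates, alpha: float = 0.5, min_score: float = 0.3):
    """Score candidates and drop anything under sltm_min_score.
    (Illustrative; assumes each candidate carries precomputed
    'tfidf' and 'vector' similarity fields.)"""
    scored = [(c, hybrid_score(c["tfidf"], c["vector"], alpha)) for c in candidates]
    kept = [cs for cs in scored if cs[1] >= min_score]
    return sorted(kept, key=lambda cs: cs[1], reverse=True)
```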

MCP Proxy Layer

The proxy is now an optional advanced mode. The recommended default is direct mode, which points the IDE straight at mcp_server.py. Use proxy mode only if you explicitly want dynamic tool filtering.

Architecture
IDE <--stdio--> mcp_proxy.py <--subprocess stdio--> mcp_server.py
Proxy Defaults

When proxy mode is enabled, it defaults to a lean configuration: core tools pinned, filtering enabled, and context injection disabled.

Category  Tools                                                        Activation
core      search, compress, extract, session_log, remember, recall     Always visible
session   session_start/save, snapshot, restore, transcript_search     Recent use or keyword match
memory    memory_query, sltm_add/search/stats                          Recent use or keyword match
claudemd  claudemd_generate/check/compact/promote                      Recent use or keyword match
hybrid    route, summarize, filter_output, raw, why_context, delegate  Recent use or keyword match
meta      optimize, token_stats, context_status, notifications         Recent use or keyword match
Context Injection

Disabled by default in proxy mode because it adds recurring response overhead. Enable it only if you have a specific reason to trade tokens for extra continuity hints.

Configuration

Configured via the "proxy" key in .c3/config.json:

Key                     Default         Description
enabled                 true            Enable proxy features
PROXY_DISABLE           false           When true, the proxy acts as a transparent pipe
filter_tools            true            Enable dynamic tool filtering by category
always_visible          ["core"]        Categories always exposed; use ["core"] for lean defaults or ["all"] to bypass filtering
max_tools               12              Maximum tools visible per turn
use_slm                 true            Use a local SLM for uncertain classifications
slm_model               gemma3n:latest  Ollama model for SLM classification
context_window_size     10              Rolling window of tracked tool calls
inject_context_summary  false           Append a context line to tool responses
Recommended Setup
Use direct mode unless you explicitly want proxy behavior. Install with --mcp-mode direct for the recommended path or --mcp-mode proxy for advanced filtering experiments.
Windows Compatibility
The proxy uses a threaded stdin reader instead of asyncio.connect_read_pipe, which does not work on Windows' ProactorEventLoop. This is handled internally — no extra configuration is needed.

File Memory System

File Memory maintains a persistent structural index of source files — classes, functions, imports with exact line ranges — so Claude can do targeted Read calls with offset/limit instead of reading entire files.

How It Works
  1. c3_file_map("services/agents.py") returns a structural map (~50-100 tokens)
  2. Claude identifies the section it needs (e.g. check() at lines 115-152)
  3. Read(file_path, offset=115, limit=38) reads only that section
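
The three steps above amount to slicing the file by the map's line range. A minimal sketch of the targeted read:

```python
def read_section(path: str, start: int, end: int) -> str:
    """Read only lines start..end (1-based, inclusive) of a file,
    mirroring Read(file_path, offset=start, limit=end - start + 1)."""
    with open(path, encoding="utf-8") as fh:
        lines = fh.readlines()
    return "".join(lines[start - 1 : end])
```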

Maps are cached in .c3/file_memory/ (one JSON per file, keyed by MD5 of path). The FileMemoryAgent background thread re-extracts sections when files change and optionally generates AI summaries via Ollama.

Supported Languages

Rich structural mapping (AST-based) is available for:

  • Python (.py) - Classes, functions, imports
  • JavaScript/TS (.js, .ts, .tsx, .jsx) - Classes, methods, arrows
  • HTML/Markdown (.html, .md) - Headings, IDs
  • Go/Rust (.go, .rs) - Functions, traits, structs, impls
  • JSON/YAML (.json, .yaml) - Top-level keys/properties
  • CSS (.css) - Rule sets, media queries

Example Map Output

# services/agents.py (933 lines, python)
Background agent system with periodic check loop and AI enhancement.

  9-9       import json
  19-102    class BackgroundAgent
            Base class for background analysis agents.
            38-41   ai_available(self)
            42-51   _ai_generate(self, prompt, system, max_tokens)
            52-58   start(self)
            59-64   stop(self)
  103-152   class IndexStalenessAgent
            115-152 check(self)
  ...

PostToolUse Hooks (Claude Code only)

C3 registers two PostToolUse hooks in .claude/settings.local.json that run after Claude executes certain tools. These hooks are only installed for Claude Code — other IDEs rely on the workflow instructions in their instructions file instead.

Hook Trigger Behavior
hook_filter.py After Bash tool Filters terminal output through C3's two-pass output filter (deterministic + optional LLM). Replaces the tool result with a compressed version when >10% savings. Injects c3_delegate hints into additionalContext for unhandled tracebacks and long outputs. Stores original for c3_raw retrieval.
hook_read.py After Read tool STRICT ENFORCEMENT of the C3-First workflow. Checks the activity log for recent c3_file_map/c3_search/c3_compress calls targeting the same file. If none found and the file is 30+ lines of code, injects a ⚠️ [c3:enforce] warning via additionalContext. Also queues the file for async indexing by the FileMemory agent.
Enforcement skips
The Read hook does not enforce on: files under 30 lines, targeted reads (offset/limit already used), or binary files (images, PDFs). It now monitors most text-based formats including web (HTML/CSS), data (JSON/YAML), and docs (MD). Enforcement checks the last 30 activity log entries for any C3 tool call referencing the same file.
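
The skip rules above can be expressed as a single predicate. The thresholds come from the text; the function and its inputs are illustrative, not hook_read.py's actual interface.

```python
# Extension list is illustrative; the real hook's binary detection may differ.
BINARY_EXTS = {".png", ".jpg", ".gif", ".pdf", ".ico"}

def should_enforce(path: str, line_count: int, used_offset: bool,
                   recent_c3_hits: int) -> bool:
    """Decide whether the Read hook should warn, per the rules above:
    skip short files (<30 lines), targeted reads (offset/limit used),
    binary files, and files already covered by a recent C3 call."""
    ext = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    if ext in BINARY_EXTS:
        return False
    if line_count < 30 or used_offset:
        return False
    return recent_c3_hits == 0
```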

Automatic Behavior

On Startup
  • Loads (or builds) the code index
  • Loads (or builds) the transcript index from ~/.claude/projects/ (Claude Code only; skipped in other IDEs)
  • Starts a new session
  • Begins watching project files for changes
  • Initializes the file memory store for structural file indexing
  • Syncs Claude Code conversation transcripts into the conversation store (Claude Code only)
  • Starts a background thread that re-syncs conversation transcripts every 60 seconds (Claude Code only)
  • Launches background agents (index staleness, memory pruner, instructions file drift, context budget, session insight, instructions file auto-updater, file memory)
On Shutdown
  • Stops the background conversation sync thread
  • Stops all background agents
  • Stops the file watcher
  • Saves the current session
  • Performs a final forced sync of Claude Code conversation transcripts to capture the session's last turns (Claude Code only)

Compact Response Format

All MCP tool responses use a terse [tag:value] format instead of markdown prose, reducing response overhead by ~35%. Examples:

[search:auth middleware] 3 results, 1200tok
--- services/auth.py:L10-45 verify_token (function,180tok)
...code...

[stats] files:18 chunks:142 tokens:24.5k | symbols:89 index:38KB facts:12 calls:7

[logged:decision]
[nudge:save_facts|calls:25|facts:0]

[remembered:a1b2c3] total:13
[recall:auth] 3 facts
[architecture] Auth uses JWT in localStorage
[convention] All middleware in services/middleware/

[snapshot:20260228_143012] 167tok captured
[restore:20260228_143012] task:Fix auth flow d:3 f:5

[transcript:auth middleware] 3r,890tok
--- abc123:t7 [2026-02-27] score:4.2
...turn text...

[extract:.log] 8500tok->320tok (96% saved)
[log] 2400 lines | ERROR:12 | WARN:45

Claude parses these tags natively — no information is lost, but each response uses fewer tokens in the context window.

Token counting

C3 uses tiktoken (cl100k_base encoding) for accurate token counts. Falls back to a heuristic estimator if tiktoken is unavailable.
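
The count-with-fallback pattern looks like this; the chars/4 heuristic here is a common rough estimate, not necessarily C3's exact fallback formula.

```python
def count_tokens(text: str) -> int:
    """Count tokens with tiktoken's cl100k_base encoding when available,
    falling back to a rough chars/4 heuristic otherwise (the heuristic
    is illustrative; C3's estimator may differ)."""
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
    except Exception:
        return max(1, len(text) // 4)
```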

Code indexing

The indexer scans project files, splits them into structural chunks (functions, classes, blocks), and builds a TF-IDF index. This lets c3_search find relevant code without sending the entire codebase to Claude.
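
A toy version of the TF-IDF retrieval step, to show the shape of the approach. This is a pure-Python sketch, not C3's actual indexer, and it treats each pre-split chunk as a document.

```python
import math
from collections import Counter

def build_index(chunks: dict):
    """Tokenize chunks (chunk_id -> text) and precompute term and
    document frequencies."""
    docs = {cid: Counter(text.lower().split()) for cid, text in chunks.items()}
    df = Counter()
    for tf in docs.values():
        df.update(tf.keys())
    return docs, df

def tfidf_search(query: str, docs, df, top_k: int = 3):
    """Rank chunks by summed TF-IDF weight of the query terms."""
    n = len(docs)
    scores = {}
    for cid, tf in docs.items():
        s = 0.0
        for term in query.lower().split():
            if term in tf:
                # Smoothed inverse document frequency
                s += tf[term] * math.log((1 + n) / (1 + df[term]))
        if s > 0:
            scores[cid] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```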

Supported file types (80+):

Category Extensions
Python .py .pyi .pyx
JavaScript / TypeScript .js .jsx .ts .tsx .mjs .cjs
Web .html .htm .css .scss .sass .less .vue .svelte
Data / Config .json .yaml .yml .toml .ini .cfg .xml .csv
Systems .c .h .cpp .cxx .cc .hpp .rs .go .java .kt .scala .cs
Scripting .sh .bash .zsh .fish .ps1 .bat .rb .pl .lua .php .r .R .jl
Query / Schema .sql .graphql .gql .prisma
Functional .hs .ex .exs .erl .clj .elm .ml
Mobile .swift .m .dart
Docs / Markup .md .mdx .rst .tex .adoc
DevOps / IaC .tf .hcl .dockerfile .nix
Other .proto .thrift .zig .nim .v .makefile .cmake

Skipped directories: node_modules .git __pycache__ .c3 venv dist build .next .cache

File watching

The watcher runs on a background thread and tracks file creates, modifications, deletions, and moves. When enough changes accumulate, the indexer rebuilds the index automatically (or run c3 index manually).
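
The accumulate-then-rebuild behavior reduces to a counter with a threshold. A sketch under the threshold named in the agents table (15 changes triggers a rebuild); the real watcher also batches by event type and time.

```python
class RebuildTracker:
    """Count distinct pending file changes and signal a rebuild once
    enough accumulate. (Illustrative; C3's watcher is richer.)"""

    def __init__(self, threshold: int = 15):
        self.threshold = threshold
        self.pending = set()

    def record(self, path: str) -> bool:
        """Record a change; return True when a rebuild should run."""
        self.pending.add(path)
        if len(self.pending) >= self.threshold:
            self.pending.clear()
            return True
        return False
```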

Web Dashboard

c3 ui /path/to/your/project                  # Opens full UI at http://localhost:3333/
c3 ui /path/to/your/project --nano           # Opens nano mission-control UI at /nano
c3 ui /path/to/your/project --port 8080      # Custom port
c3 ui /path/to/your/project --no-browser     # Don't auto-open browser
c3 ui /path/to/your/project --silent         # Hide API request logs in terminal

The dashboard provides a visual interface for all C3 capabilities — compression, search, sessions, memory, agents, and system settings. The layout is [Left Nav] [Main Content] [Right Sidebar]. Both the left navigation and right sidebar support hover-to-open: when collapsed to their icon strip, hovering expands them automatically. Click the pin icon (📌) to keep a panel locked open; click again to unpin and return to hover-only mode. Pin state is saved to localStorage and restored on next load.

Multiple project dashboards
Each c3 ui launch auto-selects the next free port starting from 3333. Each instance uses window.location.origin for all API calls, so opening a second project at localhost:3334 shows only that project's data. The startup banner prints the assigned URL and project path.
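
Port auto-selection of this kind is typically a bind probe; a sketch (not C3's exact code):

```python
import socket

def next_free_port(start: int = 3333, limit: int = 100) -> int:
    """Return the first TCP port >= start that accepts a bind on
    localhost, probing at most `limit` candidates."""
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                sock.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port was free
            except OSError:
                continue  # in use; try the next one
    raise RuntimeError(f"no free port in {start}-{start + limit - 1}")
```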

Dashboard Tabs

1 Dashboard
Token savings card (tokens saved, savings rate with progress bar, est. cost & time saved), Claude token usage (input/output/total tokens, est. API cost), file stats, memory, MCP status, agent notifications (with severity and AI badges), recent tool calls.
2 Compressor
Stats row (total files, tokens, avg savings, last compressed), searchable file picker with sort (name/tokens/type) and type-colored badges, mode selector with descriptions, single-file compress with line-numbered output and copy button, batch-all compression with aggregate stats, clickable compression history (last 10), and a collapsible Compression Dictionary panel showing all action codes, term codes, and project-specific abbreviations.
3 Smart Index
Stats row (files, chunks, codebase tokens, symbols, index size), search input with configurable top_k (3/5/10/15/20) and max_tokens (2K/4K/8K), collapsible search history (last 10, click to restore), and expandable result cards with line-numbered code, per-chunk copy, copy-all, file-type badges, relevance score bars, and rebuild with success feedback.
4 Sessions
Current Session card (highlighted, with stat mini-boxes for decisions, files, tool calls) at top, Past Sessions list below. Tool call data is backfilled from the activity log when the session file has none.
5 Memory
Store and search facts, view all facts grouped by category, delete facts, stat boxes for totals and recall counts. Auto-refreshes every 5 seconds so facts saved via MCP appear in real time.
6 Activity Log
Live event timeline with type filtering, persistent auto-refresh, expandable detail view. Tool call events show color-coded tool name badges (search=blue, compress=purple, etc.).
7 Conversations
Full conversation history viewer. Displays user and AI turns in a threaded chat layout per session. Features:
  • Session list - left panel lists all sessions sorted newest-first. Shows title (first user message), relative date, turn count, and source badge. Each row has direct Plans and Todos quick-jump buttons.
  • Chat / Plans / Todos views — switch between the full conversation thread, logged decisions/plans (c3_session_log events), and extracted markdown checkbox items; Plans and Todos also load automatically when using the quick-jump buttons
  • Search scope toggle - explicit All / Session toggle next to the search box. Search is debounced and scope-aware; "Session" is disabled until a conversation is open.
  • Table of Contents - toggle the ToC button to show a collapsible index of user prompts (numbered, 2-line preview). In narrow layouts the ToC is hidden for readability.
  • Pagination - turns are loaded in pages with a Load more action for long sessions.
  • Session rename — title overrides stored in localStorage (key c3_conv_titles); editable inline in the list or via the Rename button in the session header
  • Gzip compression for old archives; auto-sync from Claude Code transcripts/imports; manual logging via c3_session(action='convo_log', role=..., text=..., session_id?, source?) for other IDEs
8 Agents
Per-agent configuration: enabled/use_ai toggles, interval input, AI model selector. Agent-specific advanced settings (thresholds, embed model, autonomy planner scoring/cooldown, auto-apply mode). Supports all 9 agents including AutonomyPlanner and DelegateCoach.
9 Hybrid
Three-panel view: Output Filter savings (calls, tokens, savings %) | Router decisions (routes by class, avg latency) | SLTM collections (vector status, record counts). Feature flag toggles to enable/disable each tier independently. Includes Delegate Threshold Policy controls for local AI delegation gating (threshold_enabled, minimum tokens, task-type scopes, and force-delegate task types).
10 Proxy
Tool Visibility card with "Show All / Filter by Category" mode selector, per-category pin toggles with expandable tool lists, visible/total tool count badge. Reconnect banner when tool visibility changes (prompts to restart session or run /mcp). Proxy settings (enable/disable, context injection, SLM classification, max tools, window size). Live context panel with detected goal, recent files, decisions, and tool calls. Traffic and context injection metrics.
11 Settings
Project info (editable name, tech stack, description), quick actions (rebuild index, save instructions file), project data management (view sizes and item counts per .c3/ category, clear actions), instructions file management, and MCP management: target IDE selector (Claude/Gemini/VS Code/Cursor/Codex/Antigravity), custom MCP add form, installed MCP cards, and per-card trash removal. Removing c3 can also clean related IDE files/hooks. Includes IDE selector for C3 MCP installation and optimization suggestions.

Header Bar

A persistent top bar above the main content area. Shows the active tab icon and title, connection status, inline quick stats (savings % and indexed file count), and action buttons.

Left Navigation

A collapsible vertical tab list on the left edge. It supports two modes: pinned open or hover-to-expand (described under Hover-open & pin below).

The bottom of the left nav shows a connection status dot, service badges, and a health refresh button (only visible when expanded).

Right Sidebar

A collapsible right sidebar visible on all tabs. It supports the same hover-open and pin modes as the left nav.

Savings Summary

Compact display at the top showing tokens saved, estimated cost, estimated time, and Claude input/output token counts. Only visible when the sidebar is expanded and stats are available.

Quick Settings
  • Rebuild Index — triggers POST /api/index/rebuild
  • Save Instructions — triggers POST /api/claudemd/save (writes to IDE-appropriate file)
  • Auto-refresh toggle — enables 5-second polling for the activity feed
Activity Feed

Compact view of the last 15 events from /api/activity?limit=15. Each row shows time-ago, a color-coded tool name badge (for tool calls) or event type badge, and a one-line summary. Click "View all" to navigate to the Activity Log tab.

Current Session

Shows the live running session from /api/sessions/current, which reconstructs the active MCP session from the activity log. Displays a glowing "Live Session" indicator, session ID, started time, live duration, and counts for decisions, files touched, and tool calls. Falls back to the most recent saved session if no live session is active. Click "View all sessions" to navigate to the Sessions tab.

Services

Color-coded badges for the four C3 services: c3, proxy, ollama, and sltm. Green = reachable, red = unreachable. A dedicated refresh icon button sits inline with the "Services" label — clicking it re-calls GET /api/health and updates all badges with a spin animation. Shows "not checked yet" until the first check completes. The same connection status (with its own refresh button) is mirrored in the header bar and in the left sidebar bottom panel.

Hover-open & pin
Both sidebars start pinned open. Unpin via the pin icon to switch to hover-only mode — the panel collapses to an icon strip and expands on mouse enter. Clicking any icon in the collapsed strip (right sidebar) or the pin icon (left nav) re-pins it. Pin state persists across page reloads via localStorage.

Bottom Drawer

A slide-up drawer anchored to the bottom of the main content area. It has two tabs:

Agent Activity

Lists all pending agent notifications with severity badge, optional AI badge (when ai_enhanced is true), agent name, title, message, and relative timestamp. Each notification includes contextual quick action buttons that either call an API endpoint (with inline feedback and auto-acknowledge) or navigate to a relevant dashboard tab. Supports individual acknowledge and "Acknowledge All".

API Console

Live tool call timeline from the activity log. Each row shows timestamp, tool name badge, arguments summary, and result preview. Click a row to expand the full args/result JSON.

REST API

The web server exposes a full REST API at http://localhost:3333.

Core

Method Endpoint Description
GET /api/stats Comprehensive system stats including claude_tokens (input/output from Claude Code sessions) and total_tool_calls
GET /api/files List project files
POST /api/compress Compress a file {file, mode}
POST /api/compress/batch Batch compress all project files {mode}
GET /api/compress/protected-files List files blocked from compression
POST /api/search Search code index {query, top_k, max_tokens}
POST /api/index/rebuild Rebuild the index
GET /api/index/stats Index statistics
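The search endpoint ranks indexed chunks with TF-IDF before anything is sent to the model. As a rough illustration of that scoring (a simplified sketch, not C3's actual indexer — the real one also chunks files by code structure), the core idea looks like this:

```python
import math
import re
from collections import Counter

# Hypothetical, simplified TF-IDF retrieval sketch for POST /api/search.
def tokenize(text):
    return re.findall(r"[a-zA-Z_]\w+", text.lower())

def build_index(docs):
    """docs: {name: text} -> per-doc term counts, document frequencies, doc count."""
    tf = {name: Counter(tokenize(text)) for name, text in docs.items()}
    df = Counter()
    for counts in tf.values():
        df.update(counts.keys())          # count each doc once per term
    return tf, df, len(docs)

def search(query, tf, df, n_docs, top_k=5):
    q_terms = tokenize(query)
    scores = {}
    for name, counts in tf.items():
        score = 0.0
        for term in q_terms:
            if counts[term]:
                # terms that appear in every doc get idf = 0 (no signal)
                score += counts[term] * math.log(n_docs / df[term])
        if score:
            scores[name] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

docs = {
    "auth.py": "def login(user, password): verify_token(user)",
    "billing.py": "def charge(card): pass",
}
tf, df, n = build_index(docs)
results = search("login password", tf, df, n)
print(results)
```

Only the top_k highest-scoring chunks (capped by max_tokens) would then be forwarded, which is where the retrieval savings come from.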

Sessions

Method Endpoint Description
GET /api/sessions List all sessions
GET /api/sessions/current Live running session reconstructed from activity log, or latest saved session if ended. Returns live: true when active.
GET /api/sessions/<id> Session detail (includes tool_calls)
POST /api/sessions/start Start a session {description}
POST /api/sessions/save Save session {summary}
GET /api/sessions/context Compressed context from recent sessions

Memory

Method Endpoint Description
GET /api/memory/facts List all stored facts
POST /api/memory/remember Store a fact {fact, category}
POST /api/memory/recall Search facts {query, top_k}
POST /api/memory/query Search facts + sessions {query, top_k}
DELETE /api/memory/facts/<id> Delete a fact by ID
GET /api/memory/export?category= Export facts as markdown grouped by category {markdown, count}
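The export endpoint returns facts grouped by category as markdown. A minimal sketch of that grouping (field names here are illustrative, not necessarily C3's stored schema):

```python
from collections import defaultdict

# Hypothetical sketch of /api/memory/export: group facts by category,
# emit a markdown section per category, and report the total count.
def export_facts(facts):
    groups = defaultdict(list)
    for fact in facts:
        groups[fact.get("category", "general")].append(fact["fact"])
    lines = []
    for category in sorted(groups):
        lines.append(f"## {category}")
        lines.extend(f"- {text}" for text in groups[category])
        lines.append("")
    return {"markdown": "\n".join(lines).rstrip(), "count": len(facts)}

facts = [
    {"fact": "We use JWT for auth", "category": "conventions"},
    {"fact": "API gateway lives in services/gateway.py", "category": "architecture"},
]
result = export_facts(facts)
print(result["markdown"])
```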

Activity Log

Method Endpoint Description
GET /api/activity?limit=100&type=&since=&until= Recent activity events, filterable by type and ISO timestamp range (since/until)
GET /api/activity/stats Event counts by type, total, time range
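Because the activity log is append-only JSONL and timestamps are ISO strings, the type/since/until filters reduce to line-by-line predicates. A sketch of that filtering (the event shape with ts/type fields is assumed for illustration):

```python
import json
from io import StringIO

# Sketch of the filtering /api/activity applies over the JSONL log.
def filter_activity(lines, limit=100, type_=None, since=None, until=None):
    events = [json.loads(line) for line in lines if line.strip()]
    if type_:
        events = [e for e in events if e["type"] == type_]
    if since:
        events = [e for e in events if e["ts"] >= since]  # ISO strings sort lexicographically
    if until:
        events = [e for e in events if e["ts"] <= until]
    return events[-limit:]                                # keep the most recent N

log = StringIO(
    '{"ts": "2026-03-05T10:00:00", "type": "tool_call"}\n'
    '{"ts": "2026-03-05T11:00:00", "type": "session_start"}\n'
    '{"ts": "2026-03-06T09:00:00", "type": "tool_call"}\n'
)
recent = filter_activity(log, type_="tool_call", since="2026-03-05T10:30:00")
print(recent)
```

Note that ISO-8601 timestamps compare correctly as plain strings, so no datetime parsing is needed for the range filter.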

Conversations

Method Endpoint Description
GET /api/conversations?limit=100 List session metadata sorted by most recent first
GET /api/conversations/sync?source=all|claude|imports Sync from transcript/import sources - returns {synced, total, by_source, errors?}
GET /api/conversations/stats Aggregate stats: sessions, turns, user_tokens, assistant_tokens, compressed_sessions
GET /api/conversations/search?q=&limit=30&session_id= TF-IDF search across all (or one) session - returns scored turn hits with turn_key
GET /api/conversations/<session_id>?offset=&limit= Turn list for one session (paginated; handles .gz archives transparently)
POST /api/conversations/<session_id>/turn Append a turn manually {role, text, tool_calls?, source?}
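The turn endpoint pages with offset/limit and reads .gz archives transparently. One way to sketch that (filenames and turn shape are hypothetical):

```python
import gzip
import json
import os
import tempfile

# Sketch of paginated turn reads with transparent .gz handling: pick the
# opener by extension, then slice the decoded turns.
def read_turns(path, offset=0, limit=50):
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        turns = [json.loads(line) for line in f if line.strip()]
    return turns[offset:offset + limit]

# Demo: write a gzipped session archive and page through it.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "session_abc.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    for i in range(5):
        f.write(json.dumps({"role": "user", "text": f"turn {i}"}) + "\n")

page = read_turns(path, offset=2, limit=2)
print([t["text"] for t in page])
```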

Notifications

Method Endpoint Description
GET /api/notifications?limit=20 Get unacknowledged agent notifications. Each entry includes: id, agent, severity, title, message, timestamp, acknowledged, ai_enhanced
POST /api/notifications/ack Acknowledge a notification {id}
POST /api/notifications/ack-all Acknowledge all pending notifications

MCP

Method Endpoint Description
GET /api/mcp/status?ide=<profile> MCP status for the selected IDE profile (configured, active, server_found, config_path, normalized server list). If ide is omitted, uses the active profile.
POST /api/mcp/install Install MCP configuration for a target IDE {ide}
POST /api/mcp/servers Add/update custom MCP server for selected IDE {ide, name, command, args, env?, enabled?}. Supports JSON configs and Codex TOML.
DELETE /api/mcp/servers/<name>?ide=<profile>&remove_files=1 Remove MCP server from selected IDE config. When deleting c3 with remove_files=1, also removes related IDE artifacts (instructions file, Claude hooks/settings cleanup).

Protocol

Method Endpoint Description
POST /api/encode Encode text {text}
POST /api/decode Decode text {text}
GET /api/protocol/header Protocol header for system prompts
GET /api/protocol/dictionary Full compression dictionary
POST /api/protocol/build-dictionary Build project-specific dictionary
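Encode/decode is dictionary-driven shorthand substitution. A toy sketch of the round trip (the token map below is invented, not C3's real dictionary, and a production encoder would match word boundaries rather than raw substrings):

```python
# Hypothetical shorthand dictionary; naive substring replacement is used
# here only for illustration.
DICTIONARY = {
    "function": "fn",
    "implementation": "impl",
    "configuration": "cfg",
}
REVERSE = {short: word for word, short in DICTIONARY.items()}

def encode(text):
    for word, short in DICTIONARY.items():
        text = text.replace(word, short)
    return text

def decode(text):
    for short, word in REVERSE.items():
        text = text.replace(short, word)
    return text

msg = "update the configuration for the login function"
packed = encode(msg)
print(packed)
```

The savings come from sending the packed form in prompts; the protocol header tells the model how to read and emit it.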

Instructions File

Method Endpoint Description
GET /api/claudemd Get generated instructions file content
POST /api/claudemd/save Save instructions file to project (IDE-appropriate path)
GET /api/claudemd/check Check instructions file for staleness and drift
POST /api/claudemd/compact Compact instructions file {target_lines}
GET /api/claudemd/promote Get promotion candidates for instructions file
GET /api/optimize Get optimization suggestions

Hybrid Intelligence

Method Endpoint Description
GET /api/hybrid/metrics All tier metrics (filter, router, SLTM)
GET /api/hybrid/config Current hybrid feature flags and config
PUT /api/hybrid/config Update hybrid feature flags {key: value}
GET /api/delegate/config Current delegate/local-AI policy config (including threshold settings)
PUT /api/delegate/config Update delegate policy {threshold_enabled, threshold_min_total_tokens, ...}
GET /api/sltm/stats SLTM backend status and collection sizes
POST /api/sltm/search Search SLTM {query, category, top_k}
POST /api/sltm/add Add record to SLTM {text, category, metadata}

Proxy

Method Endpoint Description
GET /api/proxy/metrics Proxy traffic and filtering metrics (written on shutdown)
GET /api/proxy/config Current proxy configuration
PUT /api/proxy/config Update proxy config {key: value}
GET /api/proxy/tools Full tool inventory with categories, visibility status, and pinned state
GET /api/proxy/state Live proxy state (goal, recent files, decisions, tool calls)

CLI Commands

All commands are run via c3 (or python cli/c3.py) from within your project directory.

Project Setup

c3 init <project_path> [--ide auto|claude|vscode|cursor|codex|gemini] [--force]
                                           # Initialize C3: builds index, creates config, generates instructions file,
                                           # and automatically installs the MCP config for your IDE.
                                           # Re-running on an existing project now migrates config defaults and
                                           # refreshes instruction workflow files (Update / Clear / Reset options).
c3 index [--max-files 500]                 # Rebuild the code index
c3 install-mcp [project_path] [ide] [--ide auto|claude|vscode|cursor|codex|gemini]  # (Re-)generate MCP config manually
c3 ui [project_path] [--port 3333] [--nano] [--silent]   # Launch web dashboard (full or nano)

Compression & Search

c3 compress <file> [--mode smart] [-o]  # Compress a file (modes: structure, outline, smart, diff)
c3 context <query> [--top-k 5]        # Get relevant context for a query
c3 pipe <query> [--top-k 5]           # All-in-one pipeline: index + context + session -> pipe to Claude
c3 encode <text> [--pipe]             # Encode to compressed format
c3 decode <text>                      # Decode compressed format

Sessions

c3 session start [description]        # Start a new session
c3 session save [summary]             # Save current session
c3 session load [session_id]          # Load a session (defaults to latest)
c3 session list                       # List all sessions
c3 session context                    # Get session context for prompt

Other

c3 stats                              # Show token usage analytics
c3 benchmark [project] [--sample-size 25] [--json] [--output .c3/benchmark_latest.json]
                                      # Run local benchmark for compression, retrieval, and grounding proxy metrics
c3 optimize                           # Show optimization suggestions
c3 claudemd generate                  # Preview auto-generated instructions file
c3 claudemd save                      # Write instructions file to project root (IDE-appropriate path)

CLI Examples

# Pipe compressed context into Claude Code
c3 context "fix the auth bug" | claude -p -

# Auto-compress before sending
c3 encode "Read src/Dashboard.tsx and fix the error on line 47" | claude -p -

# All-in-one pipeline
c3 pipe "fix the metrics calculation" | claude -p -

# Run benchmark and save machine-readable report
c3 benchmark . --sample-size 25 --output .c3/benchmark_latest.json

# Save session after work
c3 session save "Fixed auth flow, updated tests"

# Start new session with prior context loaded
c3 session load | claude --resume

Architecture

  IDE (Claude Code / VS Code Copilot / Cursor / Codex / Gemini CLI)
       | MCP Protocol (IDE-specific config file)
       v
  MCP Proxy ──── Dynamic tool filtering + context injection (optional, opt-in via MCP config)
       | Subprocess stdio (NDJSON)
       v
  C3 MCP Server ─── 26 tools: search, compress, session, memory, CLAUDE.md, context, hybrid, notifications...
       |
       +── Compression      Smart Index
       |    AST Summary       TF-IDF + Code Structure
       |    Diff Engine       Chunk Retrieval
       |    Dedup Cache
       |
       +── Session Manager   Tiered Memory
       |    Decisions          Fact Store (TF-IDF)
       |    Tool Calls         Cross-Session Search
       |    Instructions Gen   Category Tagging
       |
       +── Instructions Mgr   Protocol
       |    Generate/Check      Encoder
       |    Compact/Promote     Dictionary
       |
       +── Context Snapshots  Transcript Index
       |    Capture/Restore     TF-IDF over .jsonl
       |    Save-and-Restore    Past Conversation Search (Claude Code)
       |
       +── File Watcher      File Extractor
       |    Change Detection    Log/JSONL Pre-filter
       |    Auto Index Rebuild  Token Savings Tracking
       |
       +── Background Agents  Notification Store
       |    Index Staleness      Thread-safe JSONL Queue
       |    Memory Pruner        Severity + Dedup
       |    Instructions Drift   Auto-surface to Claude
       |    Context Budget       AI-enhanced (Ollama)
       |    Session Insight      AI Badge + Quick Actions
       |
       v
  Web Dashboard (:3333)
   [Left Nav ⇄ pin] [Dashboard | Compressor | Index | Sessions | Memory | Activity | Hybrid | Agents | Proxy | Settings] [Right Sidebar ⇄ pin]
   Both sidebars: hover-to-expand (icon strip) or pin to keep open (persisted in localStorage)
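The proxy-to-server link above uses newline-delimited JSON over subprocess stdio: one JSON message per line. A minimal sketch of that framing (the JSON-RPC-style fields are illustrative, not C3's exact payloads):

```python
import json
from io import StringIO

# Sketch of NDJSON framing on the proxy <-> MCP server stdio link.
def write_message(stream, msg):
    stream.write(json.dumps(msg) + "\n")   # one message per line

def read_messages(stream):
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

pipe = StringIO()   # stands in for the subprocess pipe
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
write_message(pipe, {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                     "params": {"name": "c3_search", "arguments": {"query": "auth"}}})
pipe.seek(0)
messages = list(read_messages(pipe))
print([m["method"] for m in messages])
```

Line-delimited framing keeps the transport trivially parseable in both directions without length prefixes.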
    

Conversations

The Conversations module records full user/assistant turns and makes them browsable and searchable in the C3 UI. It complements the existing Sessions tab (which tracks tool calls and decisions) with the actual dialogue content.

Storage

All conversation data lives under .c3/conversations/.

Data Sources

Claude Code — Automatic Sync

Claude Code transcripts are synced automatically — no manual action required:

  • Startup — initial sync of all transcript files on MCP server start
  • Every 60 seconds — background thread keeps the store current during active sessions
  • Shutdown — forced final sync captures the session's last turns before exit

You can also trigger a manual sync via Sync in the Conversations tab or GET /api/conversations/sync?source=all. Use source=claude or source=imports to scope it. Sync is incremental — only files changed since the last run are reprocessed.
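Incremental sync typically means tracking per-file modification times in a manifest and reprocessing only files whose mtime changed. A sketch of that idea (a plausible mechanism, assumed here rather than taken from C3's source):

```python
import os
import tempfile

# Sketch of incremental sync: a {path: mtime} manifest lets the syncer
# skip transcripts unchanged since the last run.
def incremental_sync(paths, manifest):
    changed = []
    for path in paths:
        mtime = os.path.getmtime(path)
        if manifest.get(path) != mtime:
            changed.append(path)          # would be re-parsed and re-indexed
            manifest[path] = mtime
    return changed

tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "a.jsonl")
b = os.path.join(tmp, "b.jsonl")
for p in (a, b):
    with open(p, "w") as f:
        f.write("{}\n")

manifest = {}
first = incremental_sync([a, b], manifest)    # both files are new
t = os.path.getmtime(a)
os.utime(a, (t + 1, t + 1))                   # simulate a later write
second = incremental_sync([a, b], manifest)   # only a has changed
print(len(first), len(second))
```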

Other IDEs — Manual Logging via MCP tool

Call the c3_session(action='convo_log', role=..., text=..., session_id?, source?) MCP tool to append a turn. If session_id is omitted, the current C3 session ID is used. This allows VS Code Copilot and Cursor users to build up conversation history. The REST endpoint POST /api/conversations/<session_id>/turn provides the same capability programmatically.

Conversations UI

Search scope
Use the All / Session toggle next to the search box to control search scope explicitly. Set it to All to search across every session simultaneously. Switch to Session to narrow results to the open conversation.

Compression Modes

When using c3_compress or the CLI compress command:

Mode Best For What It Keeps
structure Large files (1000+ tokens) Function/class signatures, imports
outline Medium files Signatures + first-line docstrings + key comments
smart General use (default) Adapts based on file size: full text for <100 tokens, outline for <1000, structure for larger
diff Repeated reads Only the changes since C3 last saw the file (1 line of context)
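Smart mode's dispatch follows the size thresholds in the table. As a sketch (the 100/1000 cutoffs come from the table above; the 4-chars-per-token estimate is a rough heuristic, not C3's tiktoken-backed counter):

```python
# Sketch of smart mode: pick a compression strategy from an estimated
# token count using the documented thresholds.
def estimate_tokens(text):
    return max(1, len(text) // 4)   # ~4 chars/token heuristic (assumption)

def pick_mode(text):
    tokens = estimate_tokens(text)
    if tokens < 100:
        return "full"        # small file: send verbatim
    if tokens < 1000:
        return "outline"     # medium: signatures + docstrings + key comments
    return "structure"       # large: signatures + imports only

print(pick_mode("x = 1"))                       # tiny file
print(pick_mode("def f(): pass\n" * 100))       # medium file
print(pick_mode("def f(): pass\n" * 2000))      # large file
```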

Project Data Layout

After initialization, C3 creates this structure in your project:

.c3/
  config.json            # project configuration (includes "ide" key for cross-IDE support)
  dictionary.json        # project-specific compression dictionary
  index/
    index.json           # TF-IDF code index
  cache/                 # file compression cache (for diff mode)
  sessions/
    session_*.json       # saved sessions (includes tool_calls, duration)
    analytics.json       # aggregate session stats
  facts/
    facts.json           # persistent fact store
  snapshots/
    snap_*.json          # context snapshots for save-and-restore workflow
  transcript_index/
    index.json           # TF-IDF index over .jsonl transcripts (Claude Code only)
    manifest.json        # tracks which transcript files have been indexed
  file_memory/
    *.json               # per-file structural maps (classes, functions, line ranges)
    _queue.txt           # async update queue (from Read hook)
  conversations/         # conversation turns and archives (see Conversations)
  activity_log.jsonl     # append-only activity log
  notifications.jsonl    # agent notification queue
  proxy_metrics.json     # proxy traffic and filtering metrics (written on shutdown)

Privacy
All data is local to your project. Nothing is sent to external services.

Project Structure

claude-companion/
  cli/
    c3.py                # CLI entry point (all commands)
    mcp_server.py        # MCP server (30 tools via FastMCP)
    mcp_proxy.py         # Optional advanced MCP proxy (tool filtering)
    server.py            # Flask web server + REST API
    ui.html              # Single-page React dashboard
    docs.html            # Documentation (this page)
    hook_filter.py       # PostToolUse hook for Bash (output filtering, Claude Code only)
    hook_read.py         # PostToolUse hook for Read (C3 enforcement + file memory queue, Claude Code only)
  services/
    compressor.py        # AST-based code compression
    indexer.py           # TF-IDF code index
    session_manager.py   # Session tracking + instructions file generation
    claude_md.py         # Instructions file lifecycle management (generate/check/compact/promote)
    memory.py            # Tiered memory with fact storage + search
    context_snapshot.py  # Context snapshots for save-and-restore workflow
    transcript_index.py  # TF-IDF index over .jsonl transcripts (Claude Code only)
    activity_log.py      # Append-only JSONL activity log
    notifications.py     # Thread-safe notification queue for agents
    agents.py            # Background analysis agents (7 daemon threads, AI-enhanced)
    file_memory.py       # Persistent structural file index with line ranges
    tool_classifier.py   # Tool category classification for proxy filtering
    proxy_state.py       # Sliding window conversation state tracker
    watcher.py           # File system change monitoring
    protocol.py          # Compression protocol encoder/decoder
  core/
    __init__.py          # Token counting utilities
    ide.py               # IDE profile registry (Claude Code, VS Code, Cursor, Codex) + auto-detection
    config.py            # Hybrid, proxy, delegate, and agent configuration loaders
  install.bat            # Windows installer
  install.sh             # Linux/macOS installer
  pyproject.toml         # Python package metadata + dependencies

Token Savings Estimates

Strategy Estimated Savings
AST Summarization 40-70% on file reads
Diff-only mode 60-90% on edits (1-line context)
Smart Retrieval 50-80% on search operations
Compact MCP Responses ~35% on tool response overhead (terse tags vs prose)
Session Memory 30-50% on repeated context
Compression Protocol 20-40% on prompts
Delegation to local LLM (c3_delegate) ~80% on analysis, summarization, test generation, and code review tasks
Log/output pre-filtering (c3_filter(file_path=...), c3_filter(text=...)) 50-95% on log files and noisy terminal output
Combined 65-90% overall
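The combined figure is lower than the sum of the rows because stacked savings compound: each stage only removes a fraction of what the previous stage left. Treating the rates as independent (an assumption for illustration, using rough midpoints of the ranges above):

```python
# Sketch of how stacked, independent savings compound: each stage removes
# a fraction of the tokens remaining after the previous stage.
def combined_savings(rates):
    remaining = 1.0
    for rate in rates:
        remaining *= 1.0 - rate
    return 1.0 - remaining

# e.g. AST summarization (~55%) + session memory (~40%) + protocol (~30%)
print(round(combined_savings([0.55, 0.40, 0.30]), 3))
```

With those three stages alone, 0.45 × 0.60 × 0.70 ≈ 0.19 of the tokens remain, i.e. roughly 81% saved — consistent with the 65-90% combined range.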
Measured Benchmark (March 5, 2026)

Latest measured values from c3 benchmark in this repository:

  • Compression: 95.8% token savings across 25 sampled files
  • Search context: 95.1% fewer tokens vs naive full-file baseline
  • Grounding proxy hit rate (expected file in top-5): 83.3% for C3 retrieval (baseline: 66.7%)
  • Delegate offload: measured locally and promoted into the main benchmark scorecard when Ollama is available

Tips

Let Claude use the tools
You don't need to tell Claude to call C3 — it will use the tools when they're relevant. But you can explicitly ask things like "search the codebase for authentication" or "remember that we use JWT for auth".
Use remember for conventions
Store things like "we use snake_case for Python, camelCase for JS" or "the API gateway is in services/gateway.py". These persist across sessions and surface when relevant.
Memory nudges are automatic
C3 nudges Claude to save facts at natural checkpoints — after 20+ tool calls with no saves, 3+ decisions with no facts, or every 15 tool calls. Nudges use terse tags like [nudge:save_facts|calls:25|facts:0] to minimize token overhead. Once Claude saves a fact, the nudges calm down.
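The trigger conditions above reduce to a small predicate. A sketch (the tag format mirrors the example in the tip; the exact counters and precedence are assumptions):

```python
# Sketch of the nudge triggers: 20+ tool calls with no saved facts,
# 3+ decisions with no facts, or every 15 tool calls.
def nudge(tool_calls, facts_saved, decisions):
    if facts_saved == 0 and tool_calls >= 20:
        return f"[nudge:save_facts|calls:{tool_calls}|facts:0]"
    if facts_saved == 0 and decisions >= 3:
        return f"[nudge:save_facts|decisions:{decisions}|facts:0]"
    if tool_calls > 0 and tool_calls % 15 == 0:
        return f"[nudge:checkpoint|calls:{tool_calls}]"
    return None

print(nudge(25, 0, 1))   # matches the example tag in the tip above
print(nudge(10, 2, 0))   # no trigger fires
```

The terse bracketed tags keep the nudge itself nearly free in token terms.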
Use snapshots to manage context
When your context gets heavy, call c3_session(action='snapshot') to save your working state. After resetting context (/clear in Claude Code, or starting a new chat in other IDEs), call c3_session(action='restore') to bring back your decisions, files, and notes in a compact briefing — much cheaper than replaying the full conversation. When over the budget threshold, C3 nudges the AI to snapshot and restart.
Filter long terminal output
If a shell command prints more than ~20 lines, run c3_filter(text="...") before analysis. For log/data files, run c3_filter(file_path=..., pattern?) before reading raw content.
Search past conversations
c3_transcript_search indexes your Claude Code transcript history (Claude Code only). In other IDEs, use c3_memory(action='recall') or c3_memory(action='query') to find context from past sessions. These tools work across all IDEs and search stored facts and semantic memory.
Pre-filter large files with extract
Before reading a large log, JSONL, or data file, use c3_filter(file_path=...) to get just the relevant parts. For logs it finds errors/warnings; for JSONL it samples entries; for code files it delegates to the compressor.
Delegate heavy analysis to save ~80% tokens
Use c3_delegate before doing analysis inline. Mandatory triggers: any file >200 lines you need to understand (not edit) → task_type='summarize' or 'explain'; any error traceback → task_type='diagnose'; writing unit tests → task_type='test'; code review → task_type='review'; codebase Q&A → task_type='ask'. Responses are cached in-session so repeated calls are instant. You can also use task_type='auto' and set delegation threshold policy in .c3/config.json (delegate.threshold_enabled, delegate.threshold_min_total_tokens, delegate.threshold_task_types).
Rebuild after big changes
If you've added many files or restructured the project, run c3 index from the terminal.
Per-project instances
Each project gets its own C3 instance with its own index, sessions, and facts. Run c3 init in each project — it handles both initialization and MCP registration in one step.
Browse session history
Session files in .c3/sessions/ are plain JSON. You can read them directly to review past decisions, file changes, and tool calls.

Troubleshooting

Tools don't appear
  • Make sure the MCP config file exists (.mcp.json for Claude Code, .vscode/mcp.json for VS Code, .cursor/mcp.json for Cursor, .codex/config.toml for Codex)
  • Check that the Python path in the config is correct ("command" field for JSON configs; command key for Codex TOML)
  • For Codex, ensure neither project nor global TOML disables C3 ([mcp_servers.c3] enabled = false). Project config should normally be enabled = true.
  • If install-mcp reports permission denied on .codex/config.toml, close IDE/session handles that may lock the file, then re-run.
  • Restart your IDE after generating the MCP config
  • For Claude Code: run /mcp to verify tools; for VS Code: check the Copilot agent panel; for Codex: the TOML is picked up automatically on next session start
"No relevant code found" from c3_search
  • The index may not be built. Run c3 index from the terminal.
  • Check that your file types are in the supported list
Server fails to start
  • Run python cli/mcp_server.py --project /path/to/project manually to see error output
  • Verify the install: pip install . (from the C3 source dir)
tiktoken import error
  • Install it: pip install tiktoken
  • C3 falls back to a heuristic counter if tiktoken is unavailable — this is not fatal
Second UI shows wrong project's data
  • Each c3 ui launch finds the next free port from 3333. The UI always calls its own port via window.location.origin, so navigating to localhost:3334 shows that instance's project.
  • If you see stale or wrong data, verify you're on the correct URL — the startup banner prints the assigned URL and project path for each instance.
  • Service badges (proxy, ollama, sltm) may show red on a second instance if those services are not configured for the second project. Use the refresh button in the header or sidebar to re-check.
C3 — Claude Code Companion · MIT License