Metadata-Version: 2.4
Name: aibrain
Version: 1.5.0
Summary: AI agent brain with memory, teams, flows, document ingestion, and MCP — your agent, but better every day
Author: Matthew McKee
Author-email: Matthew McKee <decker.ops@gmail.com>
License: Proprietary
Project-URL: Homepage, https://myaibrain.org
Project-URL: Repository, https://github.com/sindecker/aibrain
Project-URL: Issues, https://myaibrain.org/support
Project-URL: Documentation, https://myaibrain.org/docs
Keywords: ai,agent,memory,brain,mcp,skills,retrieval,llm,teams,flows,ingestion,orchestration
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Framework :: AsyncIO
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31.0
Requires-Dist: python-dateutil>=2.9.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: cryptography>=41.0.0
Provides-Extra: api
Requires-Dist: fastapi>=0.111.0; extra == "api"
Requires-Dist: uvicorn[standard]>=0.29.0; extra == "api"
Requires-Dist: websockets>=12.0; extra == "api"
Requires-Dist: apscheduler>=3.10.0; extra == "api"
Requires-Dist: redis>=5.0.0; extra == "api"
Provides-Extra: embeddings
Requires-Dist: sentence-transformers>=3.0.0; extra == "embeddings"
Requires-Dist: sqlite-vec>=0.1.0; extra == "embeddings"
Provides-Extra: mcp
Requires-Dist: mcp>=0.9.0; extra == "mcp"
Provides-Extra: workflows
Requires-Dist: beautifulsoup4>=4.12.0; extra == "workflows"
Requires-Dist: feedparser>=6.0.0; extra == "workflows"
Requires-Dist: icalendar>=5.0.0; extra == "workflows"
Provides-Extra: boss
Requires-Dist: redis>=5.0.0; extra == "boss"
Requires-Dist: docker>=7.0.0; extra == "boss"
Provides-Extra: browser
Requires-Dist: browser-use>=0.12.0; extra == "browser"
Provides-Extra: docs
Requires-Dist: docling>=2.8.0; extra == "docs"
Provides-Extra: finance
Requires-Dist: yfinance>=0.2.30; extra == "finance"
Provides-Extra: payments
Requires-Dist: stripe>=8.0.0; extra == "payments"
Provides-Extra: ingest
Requires-Dist: pdfplumber>=0.10.0; extra == "ingest"
Requires-Dist: openpyxl>=3.1.0; extra == "ingest"
Requires-Dist: pydantic>=2.0.0; extra == "ingest"
Provides-Extra: pdf
Requires-Dist: pypdf>=4.0.0; extra == "pdf"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: httpx>=0.27.0; extra == "dev"
Requires-Dist: ruff>=0.4.0; extra == "dev"
Provides-Extra: all
Requires-Dist: aibrain[api,boss,browser,docs,embeddings,finance,ingest,mcp,payments,pdf,workflows]; extra == "all"
Dynamic: author
Dynamic: license-file

# AIBrain -- Your AI agent that remembers, learns, and acts

> **Stop configuring. Start automating.** One install. {{aibrain.workflow_count}} workflows. Agent teams. Flow engine. Document ingestion. Universal MCP. Dual-system memory that compounds across sessions. Runs locally, no cloud lock-in.

AIBrain is a self-hosted operating system for AI agents. It gives any agent persistent memory, typed Agent/Task/Team composition, a decorator-driven Flow engine, document ingestion, universal MCP client connectivity, a reactive workflow engine, a Complementary Learning Systems (CLS) cognitive substrate with a nightly consolidation cycle, multi-model LLM routing, an approval queue, inter-agent messaging, and {{aibrain.workflow_count}} ready-to-run workflows -- all behind a {{aibrain.dashboard_pages}}-page browser dashboard. Deploy it on a laptop, a VPS, or in Docker; your agent carries its entire brain with it.

![AIBrain](https://img.shields.io/badge/AIBrain-{{aibrain.version}}-00F5D4?style=flat-square) ![License](https://img.shields.io/badge/license-Proprietary-blue?style=flat-square) ![Python](https://img.shields.io/badge/python-3.10+-blue?style=flat-square) ![Tests](https://img.shields.io/badge/tests-{{aibrain.test_count}}-00F5D4?style=flat-square) ![Workflows](https://img.shields.io/badge/workflows-{{aibrain.workflow_count}}-00F5D4?style=flat-square) ![Dashboard](https://img.shields.io/badge/dashboard-{{aibrain.dashboard_pages}}_pages-61DAFB?style=flat-square)

---

## {{features.whatsnew_header}}

- **Honest-rubric infrastructure** -- Divergence loop between self-score and eval-score, a cost-honesty refusal gate, and workflow-harness wiring with task classification, model routing, authority checks, quality evaluation, and an audit trail. Every workflow execution now produces a learning signal.
- **Public/private lens split** -- 50-entry public floor with tier-aware lens routing, `.aibrainrc` sentinel-driven overlay loading, and CI-enforced wheel-leak check so private personas never ship in the public package.
- **Hippocampus-as-a-service MCP endpoint** -- Standalone MCP server at `aibrain/mcp/server.py` with JSON-RPC 2.0 over stdio. Six tools (`memory_store`, `memory_search`, `memory_recall`, `memory_stats`, `memory_ingest`, `memory_prepare`), token-based auth on every call, stable v1.0.0 schema. Client recipes for Cursor, Claude Code, Zed, and Continue.
- **Canonical-DB invariant** -- All 37 DB-touching files route through the `aibrain_db.AIBRAIN_ROOT` resolver, a pre-commit linter blocks any new direct DB path, the `aibrain doctor` CLI runs health checks, and a secrets filter keeps commits safe. Split-brain between working memory and long-term memory is now structurally impossible.
- **Token-to-cents equivalence** -- `aibrain usage` CLI for sub-allowance tracking, cost honesty enforced at the harness level so no workflow can falsely report `cost_cents=0` on non-deterministic work.
- **Content freshness templating** -- Content drafts use `{{aibrain.version}}`, `{{aibrain.workflow_count}}`, and friends; they resolve at publish time against a live truth doc. A new gate rejects any draft with stale numeric claims or unresolved template variables.
- **Benchmarks captured** -- 8 benchmark results with honest RECOR loss and +25.7 to +35.7 gains across LoCoMo and LongMemEval.
- **{{aibrain.dashboard_pages}} dashboard pages** -- Teams, flows, ingestion, MCP management, memory browsing, honest-rubric divergence view, and more.

---

## Install

Pick the path that matches your environment. All paths install the same package from PyPI.

**One-line installer (macOS / Linux / WSL)**
```bash
curl -sSL https://myaibrain.org/install | sh
```
Creates an isolated venv at `~/.aibrain/venv`, pip-installs `aibrain`, and symlinks the CLI into `/usr/local/bin` (or `~/.local/bin` fallback). Re-run any time to upgrade. Python 3.10+ required.

**One-line installer (Windows PowerShell)**
```powershell
irm https://myaibrain.org/install.ps1 | iex
```
Creates an isolated venv at `%USERPROFILE%\.aibrain\venv`, pip-installs `aibrain`, and adds the venv Scripts dir to your user PATH. Python 3.10+ required.

**Homebrew (macOS / Linux)**
```bash
brew tap sindecker/tap
brew install aibrain
```
Installs into a Homebrew-managed venv and symlinks the CLI. See the tap: https://github.com/sindecker/homebrew-tap

**pip (any platform)**
```bash
python3 -m venv ~/aibrain-env && source ~/aibrain-env/bin/activate
pip install aibrain
```

**After any of the above**
```bash
aibrain setup    # interactive wizard — walks you through config, API keys, DB init, workflow selection
aibrain serve    # dashboard at http://localhost:8001
```

The setup wizard handles config, API keys, database init, and workflow selection. No manual file editing needed.

**Alternative: Docker**
```bash
pip install aibrain && aibrain setup
cp config.json.example config.json   # edit with your API keys and preferences
docker compose up --build             # dashboard at http://localhost:5173
```

**Brain commands** -- export, import, and merge agent knowledge:
```bash
aibrain brain stats                    # show brain statistics
aibrain brain export                   # export brain to JSON
aibrain brain import brain.json        # import another brain's knowledge
aibrain brain merge /path/or/git-url   # merge from git repo or local path
```

For local development without Docker, see [Development Setup](#development-setup) below.

---

## Agent Teams

Define agents, give them tasks, run them as a team:

```python
from aibrain import Agent, Task, Team

researcher = Agent(
    role="Research Analyst",
    goal="Find accurate, current information",
    backstory="Senior analyst with 10 years of market research experience",
)

writer = Agent(
    role="Technical Writer",
    goal="Produce clear, actionable reports",
    backstory="Former journalist who translates complex topics into plain English",
)

research_task = Task(
    description="Research the current state of MCP adoption across AI tools",
    expected_output="A structured summary with key findings and trends",
    agent=researcher,
)

report_task = Task(
    description="Write a 500-word briefing from the research findings",
    expected_output="A concise report suitable for a technical audience",
    agent=writer,
)

team = Team(
    agents=[researcher, writer],
    tasks=[research_task, report_task],
    process="sequential",  # or "hierarchical"
)

result = team.kickoff()
print(result.raw)
```

---

## Flow Engine

Build state machines with decorators:

```python
from aibrain.core.flow import Flow, start, listen, router
from pydantic import BaseModel

class ReviewState(BaseModel):
    document: str = ""
    score: float = 0.0
    approved: bool = False

class ReviewFlow(Flow[ReviewState]):

    @start()
    def load_document(self):
        self.state.document = "quarterly report content..."
        return self.state.document

    @listen(load_document)
    def score_document(self):
        self.state.score = 0.85
        return self.state.score

    @router(score_document)
    def decide(self):
        if self.state.score >= 0.8:
            return "approve"
        return "reject"

    @listen("approve")
    def approve(self):
        self.state.approved = True
        return "Document approved"

    @listen("reject")
    def request_revision(self):
        return "Revision needed"

flow = ReviewFlow()
result = flow.kickoff()
```

---

## Document Ingestion

Feed documents directly into your brain:

```python
from aibrain import Brain

brain = Brain()

# Ingest a single file
brain.ingest("report.pdf")
brain.ingest("data.csv")
brain.ingest("notes.md")

# Ingest an entire directory
brain.ingest("/path/to/documents/")

# Search across ingested content
results = brain.search("quarterly revenue")
```

Supports PDF, CSV, Excel, JSON, JSONL, Text, Markdown, and directories. Sentence-aware chunking with configurable overlap. Incremental -- re-running skips already-processed files.

---

## Universal MCP Client

Connect to any MCP server and use its tools:

```python
from aibrain.mcp import MCPServerRegistry, MCPServerStdio, MCPServerHTTP

registry = MCPServerRegistry()

# Add a stdio-based MCP server
registry.register("github", MCPServerStdio(
    command="npx",
    args=["@github/mcp-server"],
))

# Add an HTTP-based MCP server
registry.register("custom", MCPServerHTTP(
    url="http://localhost:9000/mcp",
))

# List all registered servers and their tools
for name, server in registry.servers.items():
    print(f"{name}: {server}")
```

Three transports: stdio, HTTP, and SSE. Tool bridging lets you call remote MCP tools as local Python functions.

---

## Features

**Agent/Task/Team System** -- Typed multi-agent composition. Define agents with role, goal, and backstory. Create tasks with expected output, tools, and guardrails. Run teams with sequential or hierarchical processes. Agents can delegate to each other. Failed tasks retry automatically with configurable limits.

**Flow Engine** -- Decorator-driven state machines for complex workflows. `@start`, `@listen`, and `@router` decorators define execution graphs with conditional routing. Typed state objects via Pydantic. Persistence for long-running flows. Human-in-the-loop pause/resume. `@retry` and `@timeout` decorators. Flow visualization.
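
The `@retry` and `@timeout` decorators aren't shown in the Flow example above; here is a minimal sketch of how they could attach to a step, assuming they live alongside the other flow decorators (the import path and keyword arguments are assumptions, not the documented API):

```python
# Sketch only -- the retry/timeout import path and keyword arguments are assumptions.
from aibrain.core.flow import Flow, start, retry, timeout
from pydantic import BaseModel

class FetchState(BaseModel):
    payload: str = ""

class FetchFlow(Flow[FetchState]):

    @start()
    @retry(max_attempts=3)    # assumed: re-run the step up to 3 times on failure
    @timeout(seconds=30)      # assumed: abort the step after 30 seconds
    def fetch(self):
        self.state.payload = "fetched content..."
        return self.state.payload
```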

**Document Ingestion** -- `brain.ingest("file.pdf")` feeds documents directly into memory. 7 source types: PDF, CSV, Excel, JSON, JSONL, Text, Markdown, and full directories. Sentence-aware chunking with configurable overlap. Incremental ingestion skips already-processed files.

**Universal MCP Client** -- Connect to any MCP server from Python. `MCPServerRegistry` manages multiple servers. Three transports: stdio, HTTP, and SSE. Tool bridging calls remote MCP tools as local Python functions.

**45-Page Dashboard** -- Home, Memories, Knowledge Graph, Workflows, **Visual Workflow Builder**, **Content Pipeline**, Chat, Approvals, Agents, Activity Timeline, Library, Webhooks, Costs, MCP Hub, Skills Marketplace, Settings, Setup Wizard, Command Palette, Mobile Layout, Mobile Pairing, Toast notifications, Notification Center, Keyboard Shortcuts, System Logs, API Playground, Backup Manager, Task Runner, Integrations Hub, Onboarding Tour, Error Boundary, Teams, Flows, Ingest, Company Org Charts, Agent Detail, Goals, Secrets Vault, and more. Every aspect of your agent is visible and controllable from the browser.

**200+ Automated Workflows** -- Pre-built workflows across six registries (core, skill, user, content, ops, custom) covering productivity, research, communication, devops, consolidation cycles, code analysis, security, content generation, multi-agent ops, and business metrics. Includes Email Triage, ArXiv Tracker, Autonomous Researcher, Cross-Domain Connector, Burnout Detector, Content Batch Scheduler, and Brain Sync Checker. Enable any workflow with a single toggle; schedules are fully configurable via cron expressions.

**Multi-Model LLM Router** -- Route tasks to local Ollama (free) or cloud Claude/GPT with automatic fallback via litellm. Track costs per model, per workflow, per day. Test provider connections directly from Settings. This routing layer (SelRoute) compounds over time: each hardware generation expands the $0-cost local tier without any changes to AIBrain (see "Why AIBrain Gets Stronger Over Time" below).
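
A minimal sketch of the local-first, cloud-fallback idea using litellm directly; the model names and the cheap-task heuristic are placeholders, not SelRoute's actual decision logic:

```python
# Illustrative routing sketch -- not AIBrain's SelRoute implementation.
import litellm

def route_completion(messages: list[dict], task_type: str):
    cheap = task_type in {"summarization", "classification", "formatting"}
    if cheap:
        try:
            # Local Ollama first: zero marginal cost
            return litellm.completion(model="ollama/llama3", messages=messages)
        except Exception:
            pass  # local model unavailable -- fall back to cloud
    # Harder tasks (or local failure) go to a cloud model
    return litellm.completion(model="claude-3-5-sonnet-20240620", messages=messages)
```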

**Natural Language Workflow Creation** -- Describe an automation in plain English ("Every morning check my email and summarize it") and AIBrain generates the workflow script and schedule.

**Visual Workflow Builder** -- Canvas-based drag-and-drop node editor. Four node types (trigger, action, condition, output) with 15 subtypes. Connect nodes with bezier curves, auto-layout with topological sort, save/load named workflows, export as JSON. Zero external dependencies -- pure canvas rendering.

**Knowledge Graph with Entity Extraction** -- Visualize memory connections as an interactive force-directed graph. Pattern-based NER automatically extracts technologies, URLs, emails, IPs, and proper names. Edges show references, shared tags, topic similarity, and entity connections with color-coded types and hover labels.

**Command Palette (Ctrl+K)** -- Global search across pages, actions, memories, workflows, and skills. Prefix with `>` for unified data search. Arrow-key navigation, instant routing.

**Real-time Notifications** -- WebSocket push for approval requests, activity events, and system alerts. Browser desktop notifications when approvals arrive. Live activity feed with auto-reconnect. Tab title shows pending approval count.

**Cost & Observability Dashboard** -- See exactly what your agent did, how much it cost, and where it failed. Trace every LLM call with token counts, latency, and cost. Set daily/monthly budgets with visual progress bars and overspend warnings.

**Mobile PWA** -- Install on your phone, pair via QR code, approve actions with one tap. Replaces Telegram as your mobile interface.

**SQLite + FTS5 Memory** -- Full-text searchable memory database with tagging, typing, and graph visualization. Export/import your agent's entire brain as JSON with one click from the dashboard.

**Skill Marketplace** -- Browse installed skills by category (Security, Development, Design, Automation, Media, Data, Communication). Install new skills from URL or pasted Markdown. Delete skills you no longer need. See which skills are connected to workflows and which are orphaned.

**Built-in Scheduler** -- APScheduler-backed cron engine reads `scheduled_jobs.json` on startup. Add, enable, disable, or trigger workflows from the UI or API. Run logs and next-fire times are always visible.

**Approval Queue** -- Workflows can request human approval before executing sensitive actions. Approve or reject from the dashboard or chat panel. WebSocket push keeps every client in sync with real-time desktop notifications.

**Real-time Chat** -- WebSocket chat panel with built-in commands (`/status`, `/approvals`, `/schedule`, `/costs`, `/content`, `/create`, `/help`). Messages are logged and accessible via REST for non-WebSocket clients.

**Deep Health Checks** -- `/health/deep` tests every subsystem: memory DB, scheduler, approvals, LLM providers, entity extractor, trace DB, and WebSocket connections. Results displayed in Settings.

**Content Pipeline** -- Full content management dashboard for creating, scheduling, and publishing across 7 platforms (LinkedIn, X/Twitter, Instagram, YouTube, Telegram, Email Newsletter, Blog). AI-powered content generation, queue and calendar views, platform-specific badges, and status tracking from draft through publication.

**Home Dashboard** -- Unified overview showing memory count, active workflows, pending approvals, LLM costs, recent activity, and upcoming scheduled jobs. Quick-action buttons navigate to any section.

**Multi-Agent Mesh** -- Broker integration and peer registry let multiple agents communicate. Register peers, send messages through the broker, and monitor connectivity from the Agents view.

**Webhook Event Triggers** -- Register incoming webhooks with optional HMAC signature validation. Webhooks trigger workflows on external events (GitHub pushes, payment notifications, CI results) as an alternative to polling on a schedule.

**System Logs** -- Terminal-style real-time log viewer with auto-scroll, pause/resume, level filtering (INFO/WARNING/ERROR/DEBUG), text search, and export to .txt.

**API Playground** -- Postman-style interactive endpoint tester with 17 pre-configured endpoints, query param inputs, JSON body editor, and response viewer with timing.

**Backup Manager** -- Create, restore, download, and delete memory snapshots. One-click brain export/import with file upload.

**Task Runner** -- Visual task queue with progress bars, duration tracking, cancel/retry actions, and live status updates.

**Integrations Hub** -- Central view of 12 services across 6 categories (Messaging, AI, Dev, Automation, Commerce, Finance) with connected/disconnected status and inline config.

**Onboarding Tour** -- 8-step guided walkthrough for first-time users with progress dots, skip option, and localStorage persistence.

**Keyboard Shortcuts** -- Press `?` for a full shortcut overlay. `N` toggles the notification center. `Ctrl+K` opens the command palette.

**Responsive Dashboard** -- The frontend works on any screen size. REST endpoints for chat and approvals work without WebSocket support.

**Reactive Workflow Engine** -- State-driven workflow execution inspired by the FlyWire fruit fly connectome and the Rete algorithm. Nodes fire when prerequisites are met, not in sequence. 180 lines, domain-agnostic, async-native. Forward chaining, refraction, priority-based conflict resolution, and SQLite-backed checkpoint recovery. Every workflow is a reactive graph, not a script.

**Evolution Engine** -- Compounding learning loop over a Complementary Learning Systems (CLS) substrate. Outcomes feed patterns, patterns generate hypotheses, hypotheses run experiments, experiments get measured and kept or reverted. Self-criticism, loop detection, and positive pattern reinforcement. Paired with the consolidation cycle (forgetting + consolidation are the two halves of the same system), the brain compounds across sessions rather than just growing.

**Pattern Bus** -- Cross-domain event infrastructure. Every state change across every domain emits a DomainEvent to a SQLite-backed event table. Aggregator nodes detect anomalies and correlations. The foundation for self-initiated agent reasoning.

**Inter-Agent Messaging** -- Redis-backed pub/sub protocol for multi-agent coordination. Publish/subscribe events, request-response between agents, broadcast to all. Deploy one agent locally and another on a VPS -- they communicate through the broker.
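
A bare-bones sketch of that pattern using redis-py directly; the channel name and message shape are assumptions rather than AIBrain's actual broker protocol:

```python
# Sketch with plain redis-py; channel name and payload fields are assumptions.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

# Publisher (e.g., the local agent) broadcasts an event
r.publish("agents.events", json.dumps({"from": "laptop-agent", "type": "task_done"}))

# Subscriber (e.g., the VPS agent) listens and reacts
sub = r.pubsub()
sub.subscribe("agents.events")
for message in sub.listen():
    if message["type"] == "message":
        event = json.loads(message["data"])
        print(f"{event['from']} says: {event['type']}")
        break
```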

**Durable Workflow Execution** -- Step-level retry with SQLite persistence. If a workflow fails mid-step, it resumes from the last successful step on restart. No external dependencies -- built on top of the reactive engine's checkpoint store.

**Docker Production Build** -- Multi-stage Dockerfile (Python 3.11 + Node 20) produces a single container running the API, scheduler, and nginx-served frontend with Redis broker for inter-agent messaging. Health checks and automatic restarts included.

**Cross-platform Launchers** -- `start.sh` (Unix/Mac), `start.bat` (Windows), and `Makefile` targets for dev, install, build, docker, and clean.

**Tool Hooks** -- `@before` and `@after` interceptors on any tool or LLM call. Log, filter, enforce policy, or transform inputs/outputs without touching the tool itself.
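
A sketch of what a hook might look like; the import path, hook targets, and call/result objects shown here are assumptions, not the documented API:

```python
# Hypothetical usage sketch -- import path and object shapes are assumptions.
from aibrain.hooks import before, after  # assumed module path

@before("shell_tool")
def block_destructive(call):
    # Enforce policy on inputs before the tool executes
    if "rm -rf" in call.args.get("command", ""):
        raise PermissionError("Destructive command blocked by hook")

@after("llm")
def log_usage(call, result):
    # Observe outputs without touching the tool itself
    print(f"{call.name} finished, output length: {len(str(result))}")
```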

**33 Event Types** -- Full lifecycle observability. Agent, team, flow, tool, memory, and system events fire automatically. Subscribe to any event for custom reactions.

**Tiered Retrieval (smart_search)** -- Queries route through category summaries, SelRoute reranking, memory deduplication, explainable recall, and salience scoring. The best result path is chosen automatically.

**Experiment & Regression Testing** -- Built-in A/B framework for workflows, retrieval strategies, and agent configurations. Track metrics, compare variants, detect regressions.

**15 Built-in Tools** -- Ready-to-use tools for common agent tasks, available to any agent or workflow.

**8 LLM Adapters** -- Connect to Claude, GPT, Ollama, and more through a unified adapter interface.

**Security Hardened** -- Full audit with 24 findings fixed. ShellTool allowlisted, PythonREPL sandboxed, SSRF protection, CSRF middleware, secrets vault with Fernet encryption.

---

## Why AIBrain Gets Stronger Over Time

Local inference cost roughly halves every 18 months -- the hardware curve Jensen Huang describes every time NVIDIA ships a new generation. AIBrain's SelRoute routing layer sits directly on top of that curve. Today, SelRoute sends cheap, deterministic tasks (summarization, classification, formatting) to local models at $0. As each hardware generation arrives, the boundary of what counts as "cheap enough to run locally" expands. More task types tip over into the $0 tier without any change to AIBrain's code.

The compounding effect runs in three layers. First, the brain itself (memory + learned patterns) gets more valuable as it accumulates sessions -- that growth is independent of models. Second, SelRoute's routing decisions improve as it learns which task types belong on which tier. Third, as local models improve, the routing intelligence can delegate harder tasks locally, shrinking cloud spend further.

This means AIBrain is one of the few software products with a passive architectural moat tied to hardware progress. Every chip generation is a de facto product upgrade. We don't need to build better models. We route to whoever does.

---

## Architecture

```
                       +------------------+
                       |     Browser      |
                       +--------+---------+
                                |
                        HTTP / WebSocket
                                |
                  +-------------+-------------+
                  |        nginx :5173        |
                  |  static frontend + proxy  |
                  +-------------+-------------+
                                |
                         /api/* | /ws/*
                                |
+-------------------------------+------------------------------+
|                    FastAPI Backend :8001                      |
|                                                               |
|  +----------+  +-----------+  +-------+  +--------------+    |
|  | Memory   |  | Reactive  |  | Chat  |  | Evolution    |    |
|  | (SQLite  |  | Engine    |  | (WS)  |  | Engine       |    |
|  |  + FTS5) |  | (Rete +   |  |       |  | (self-       |    |
|  |          |  |  FlyWire) |  |       |  |  improving)  |    |
|  +----------+  +-----------+  +-------+  +--------------+    |
|                                                               |
|  +----------+  +-----------+  +-------+  +--------------+    |
|  | Approval |  | Workflows |  | Agent |  | Pattern Bus  |    |
|  | Queue    |  |           |  | Broker|  | (events)     |    |
|  +----------+  +-----------+  +-------+  +--------------+    |
+--------+---------------+-------------------+-----------------+
         |               |                   |
    aibrain.db  scheduled_jobs.json     Redis :6379
```

---

## Configuration

Copy `config.json.example` to `config.json` and fill in the values you need. Empty strings use sensible defaults. All fields are optional -- configure only what you use.

Three tiers of configuration (in priority order):

1. **Environment variables**: `AIBRAIN_MEMORY_DB`, `AIBRAIN_CRON_JOBS`, etc.
2. **config.json**: Copy `config.json.example` and customize
3. **Defaults**: Relative to the AIBrain root directory
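
A sketch of how that precedence resolves a single field, using the `memory_db` setting and its documented default; the exact resolution code inside AIBrain may differ:

```python
# Precedence sketch: environment variable > config.json > built-in default.
import json
import os
from pathlib import Path

def resolve_memory_db() -> str:
    env_value = os.environ.get("AIBRAIN_MEMORY_DB")      # 1. environment variable
    if env_value:
        return env_value
    cfg = Path("config.json")
    if cfg.exists():                                      # 2. config.json
        value = json.loads(cfg.read_text()).get("memory_db")
        if value:
            return value
    return "memory/memory.db"                             # 3. default, relative to the AIBrain root

print(resolve_memory_db())
```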

### Key Fields

| Field | Description |
|---|---|
| `llm_provider` | LLM routing: `auto`, `claude`, `openai`, `ollama`, or `claude_cli` |
| `user_name` | Your name (shown in greeting and reports) |
| `budget_daily` | Daily LLM spending limit in USD (e.g., `5.00`) |
| `budget_monthly` | Monthly LLM spending limit in USD (e.g., `50.00`) |
| `agent_name` | Display name for your agent |
| `anthropic_api_key` | API key for Claude-powered workflows |
| `openai_api_key` | API key for OpenAI/GPT-powered workflows |
| `ollama_url` | Ollama API URL (default: `http://localhost:11434`) |
| `memory_db` | Path to SQLite memory database (default: `memory/memory.db`) |
| `cron_jobs` | Path to scheduled jobs registry (default: `scheduled_jobs.json`) |
| `skills_dir` | Directory containing agent skills |
| `email_accounts` | Array of email accounts to monitor (IMAP/SMTP config per account) |
| `email_reply_mode` | `"prompted"` (queue for approval) or `"auto"` |
| `email_auto_reply_senders` | Whitelist of senders for auto-reply mode |
| `location_name`, `location_lat`, `location_lon` | Location for weather workflows |
| `calendar_url`, `google_calendar_credentials` | Calendar integration |
| `github_token`, `github_username` | GitHub API access |
| `watched_repos`, `github_repos` | Repos to monitor for activity |
| `rss_feeds` | RSS/Atom feed URLs for the digest workflow |
| `arxiv_queries` | Search terms for the arXiv paper tracker |
| `job_keywords` | Keywords for job search workflow |
| `social_keywords` | Brand/topic terms for social listening |
| `price_watchlist` | Crypto/stock tickers with type, id, and display name |
| `habits` | Habits to track in the evening check-in |
| `scan_dirs` | Directories for file declutter workflow |
| `document_watch_dirs` | Directories for document parsing workflow |
| `uptime_targets` | URLs to monitor with expected HTTP status codes |

---

## Workflows

{{aibrain.workflow_count}} workflows span six registries plus standalone scripts in the `workflows/` directory. Enable any workflow with a single toggle in the dashboard or by setting `"enabled": true` in `scheduled_jobs.json`.

### Registry Overview

| Registry | File | Focus |
|---|---|---|
| **Core** | `aibrain_workflows.py` | Productivity, research, communication, devops |
| **Skill** | `aibrain_skill_workflows.py` | Consolidation cycle, code review, security, intelligence |
| **User** | `aibrain_user_workflows.py` | Personal intelligence, career, knowledge, proactive alerts |
| **Content** | `aibrain_content_workflows.py` | Multi-platform content generation and scheduling |
| **Ops** | `aibrain_ops_workflows.py` | Multi-agent ops, business metrics, compliance |
| **Scripts** | `workflows/` directory | Standalone scripts (evolution, job search, memory, SEO) |

### Core Workflows

Productivity, research, communication, and developer tooling.

| Workflow | Category | Description |
|---|---|---|
| `daily_planner` | Productivity | Task plan from calendar, emails, and priorities |
| `habit_tracker` | Productivity | Evening habit check-in and streak tracking |
| `file_declutter` | Productivity | Clean up Downloads and Desktop folders |
| `rss_digest` | Research | Summarize new articles from RSS feeds |
| `arxiv_tracker` | Research | Track new papers matching your queries |
| `price_watcher` | Research | Crypto/stock price alerts on significant moves |
| `email_triage` | Communication | Check all email accounts, categorize messages |
| `email_auto_reply` | Communication | Draft and send replies (prompted or auto mode) |
| `github_digest` | Developer | Summarize GitHub activity across your repos |
| `uptime_monitor` | Developer | Check service availability, alert on downtime |

### Skill Workflows

Consolidation cycle (CLS slow-learning pathway), code analysis, security assessment, and research automation.

| Workflow | Category | Description |
|---|---|---|
| `skill_sharpener` | Consolidation Cycle | Identify and practice weak skill areas |
| `knowledge_gap_detector` | Consolidation Cycle | Find gaps in the agent's knowledge base |
| `code_review_runner` | Code & Architecture | Automated code review with actionable feedback |
| `tech_debt_analyzer` | Code & Architecture | Track and prioritize technical debt |
| `security_assessment_runner` | Security | Run security assessments against targets |
| `threat_model_generator` | Security | Generate threat models for systems |
| `autonomous_researcher` | Intelligence | Deep-dive research on any topic |
| `trend_detector` | Intelligence | Detect emerging trends from data sources |

### User Workflows

Personal intelligence, professional development, knowledge synthesis, and proactive alerts.

| Workflow | Category | Description |
|---|---|---|
| `decision_journal` | Personal | Track decisions and outcomes over time |
| `idea_incubator` | Personal | Develop and refine ideas with structured prompts |
| `resume_updater` | Professional | Auto-update resume from recent achievements |
| `skill_gap_analyzer` | Professional | Identify skills to develop for career goals |
| `cross_domain` | Knowledge | Find connections across different knowledge domains |
| `mental_models` | Knowledge | Build and apply mental models to problems |
| `opportunity_detector` | Proactive | Surface opportunities from memory patterns |
| `burnout_detector` | Proactive | Early warning signs from activity patterns |

### Content Workflows

Multi-platform content generation and performance tracking.

| Workflow | Category | Description |
|---|---|---|
| `linkedin_content` | Platform | Generate LinkedIn posts from memory and context |
| `x_twitter_content` | Platform | Generate X/Twitter threads and posts |
| `youtube_content` | Platform | Generate YouTube scripts and descriptions |
| `content_batch_scheduler` | Orchestration | Schedule content across all platforms weekly |
| `content_performance` | Orchestration | Analyze content metrics and optimize strategy |

### Ops Workflows

Multi-agent coordination, operational health, and business metrics.

| Workflow | Category | Description |
|---|---|---|
| `brain_sync_checker` | Multi-Agent | Verify brain consistency across agents |
| `task_handoff_monitor` | Multi-Agent | Track task handoffs between agents |
| `workflow_health` | Ops | Monitor workflow execution health and failures |
| `session_handover` | Ops | Generate session handover reports |
| `credential_rotation` | Security | Remind when credentials need rotation |
| `package_downloads` | Business | Track package download metrics |
| `launch_readiness` | Business | Pre-launch checklist validation |

### Custom Workflows

User-defined workflows for domain-specific automation. Create your own workflow files in the `workflows/` directory. Examples include agent mesh operations, security scanning, analytics, and business tracking.

| Workflow | Category | Description |
|---|---|---|
| `agent_sync_checker` | Agent Mesh | Verify sync state across all agents |
| `broker_health` | Agent Mesh | Monitor inter-agent broker connectivity |
| `bt_comparator` | Security | Compare scan results across runs |
| `finding_trends` | Security | Analyze vulnerability finding trends |
| `custom_metrics` | Analytics | Track custom metrics and KPIs |
| `data_reconciler` | Analytics | Reconcile data across sources |
| `pypi_tracker` | Business | Track PyPI package downloads |
| `ship_readiness` | Business | Product ship-readiness checklist |

Run `aibrain workflows` to see the complete list with schedules and status.

All core workflows use **free APIs only**: Open-Meteo, CoinGecko, ArXiv, Hacker News, Reddit JSON. No paid API keys required for basic operation.

---

## API Endpoints

The backend exposes a REST API at `http://localhost:8001`. All resource endpoints are prefixed with `/api/agent` unless noted otherwise.

### Memory

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/memories` | List memories (paginated) |
| GET | `/api/agent/memories/search?q=` | Full-text search across all memories |
| GET | `/api/agent/memories/summary` | Memory count and type breakdown |
| GET | `/api/agent/memories/graph` | Graph nodes and edges for visualization |
| GET | `/api/agent/memories/export` | Export entire brain as JSON |
| POST | `/api/agent/memories/import` | Import brain from JSON |
| GET | `/api/agent/memories/{id}` | Get a single memory |
| POST | `/api/agent/memories` | Create a new memory |
| PUT | `/api/agent/memories/{id}` | Update a memory |
| DELETE | `/api/agent/memories/{id}` | Delete a memory |
| POST | `/api/agent/memories/extract-entities` | Backfill entity extraction on all memories |
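
For example, creating and searching memories with `requests` (the body fields mirror the snippet in "Connect Your Agent" below):

```python
import requests

BASE = "http://localhost:8001"

# Create a memory
resp = requests.post(f"{BASE}/api/agent/memories", json={
    "name": "release checklist",
    "content": "Run aibrain pre-check before every PyPI publish.",
    "type": "reference",
    "tags": "release,process",
})
resp.raise_for_status()

# Full-text search across all memories
hits = requests.get(f"{BASE}/api/agent/memories/search", params={"q": "pre-check"})
print(hits.json())
```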

### Workflows and Scheduler

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/crons` | List all scheduled jobs |
| PUT | `/api/agent/crons/{name}` | Update a job's schedule or config |
| GET | `/api/agent/crons/upcoming` | Next-fire times for all jobs |
| GET | `/api/agent/crons/groups` | Jobs grouped by category |
| GET | `/api/agent/workflows` | List all available workflows |
| GET | `/api/agent/workflows/{name}` | Get workflow details and source |
| POST | `/api/agent/workflows/{name}/enable` | Enable a workflow |
| POST | `/api/agent/workflows/{name}/disable` | Disable a workflow |
| POST | `/api/agent/workflows/{name}/run` | Trigger a workflow immediately |
| GET | `/api/agent/scheduler/status` | Scheduler health and job count |
| POST | `/api/agent/scheduler/reload` | Reload all jobs from disk |

### Approval Queue

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/approvals` | List items (filter by status/category) |
| GET | `/api/agent/approvals/summary` | Pending count and category breakdown |
| GET | `/api/agent/approvals/{id}` | Get a single approval item |
| POST | `/api/agent/approvals` | Create an approval request |
| POST | `/api/agent/approvals/{id}/approve` | Approve an item |
| POST | `/api/agent/approvals/{id}/reject` | Reject an item |

### Chat

| Method | Endpoint | Description |
|---|---|---|
| WebSocket | `/ws/chat` | Real-time chat with history replay on connect |
| GET | `/api/agent/chat/history` | Recent chat messages (REST fallback) |
| POST | `/api/agent/chat/send` | Send a chat message (REST fallback) |

### Agent Mesh

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/status` | Agent status and system info |
| GET | `/api/agent/activity` | Activity timeline |
| POST | `/api/agent/activity` | Log an activity event |
| GET | `/api/agent/peers` | List registered peer agents |
| POST | `/api/agent/peers` | Register a new peer |
| DELETE | `/api/agent/peers/{name}` | Remove a peer |
| POST | `/api/agent/broker/send` | Send a message through the broker |
| GET | `/api/agent/broker/recv` | Receive messages from the broker |
| GET | `/api/agent/broker/status` | Broker connectivity status |

### Webhooks

| Method | Endpoint | Description |
|---|---|---|
| GET | `/webhooks` | List registered webhooks |
| POST | `/webhooks` | Register a new webhook |
| PUT | `/webhooks/{name}` | Update a webhook configuration |
| DELETE | `/webhooks/{name}` | Remove a webhook |
| POST | `/webhooks/{name}` | Invoke a webhook (external callers) |
| GET | `/webhooks/{name}/history` | Invocation history for a webhook |
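
A sketch of registering a webhook and invoking it with an HMAC-signed payload; the registration body fields and the signature header name are assumptions (check your webhook configuration for the real names):

```python
# Registration fields and the signature header name are assumptions.
import hashlib
import hmac
import json
import requests

BASE = "http://localhost:8001"
SECRET = b"shared-secret"

# Register a webhook that reacts to external CI events
requests.post(f"{BASE}/webhooks", json={
    "name": "ci-results",
    "workflow": "workflow_health",   # assumed field linking the webhook to a workflow
    "secret": SECRET.decode(),
})

# External caller: sign the payload and invoke the webhook
payload = json.dumps({"status": "failed", "pipeline": "deploy"}).encode()
signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
requests.post(
    f"{BASE}/webhooks/ci-results",
    data=payload,
    headers={"Content-Type": "application/json", "X-Signature": signature},  # header name assumed
)
```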

### Search

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/search?q=` | Unified search across memories, workflows, and skills |

### Skills

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/skills` | List installed agent skills |
| GET | `/api/agent/skills/{name}` | Get skill details |
| POST | `/api/agent/skills/install` | Install a skill from URL or pasted content |
| DELETE | `/api/agent/skills/{name}` | Remove an installed skill |

### Configuration

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/config` | Get current configuration |
| PUT | `/api/agent/config` | Update configuration |
| POST | `/api/agent/config/test-provider` | Test LLM provider connectivity |

### Tasks

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/tasks` | List all tasks |
| POST | `/api/agent/tasks` | Create a new task |
| POST | `/api/agent/tasks/{id}/cancel` | Cancel a running task |
| POST | `/api/agent/tasks/{id}/retry` | Retry a failed task |

### Backups

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/backups` | List all backups |
| POST | `/api/agent/backups` | Create a new backup |
| POST | `/api/agent/backups/{id}/restore` | Restore from a backup |
| GET | `/api/agent/backups/{id}/download` | Download a backup file |
| DELETE | `/api/agent/backups/{id}` | Delete a backup |

### Logs and Costs

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/agent/logs` | Structured log viewer |
| GET | `/api/agent/costs/summary` | LLM cost summary |
| GET | `/api/agent/costs/history` | Daily cost breakdown |

### System

| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Health check with scheduler and approval status |
| GET | `/health/deep` | Deep health check -- tests all subsystems |
| WebSocket | `/ws/notifications` | Real-time push notifications for approvals and activity |
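
A minimal listener for the notifications socket using the `websockets` package (installed with the `api` extra); the fields inside each message are assumptions:

```python
# Minimal notification listener; message field names are assumptions.
import asyncio
import json

import websockets

async def listen():
    async with websockets.connect("ws://localhost:8001/ws/notifications") as ws:
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("type", "event"), event)

asyncio.run(listen())
```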

---

## Docker Deployment

The included `docker-compose.yml` runs the full stack: app server, frontend, and Redis broker.

```bash
# Copy and configure environment
cp env.example .env   # edit with your API keys

# Build and start
docker compose up --build -d

# View logs
docker compose logs -f aibrain

# Stop
docker compose down
```

The stack exposes:

- **5173** -- Dashboard UI (nginx serving the built React app)
- **8001** -- Backend API (FastAPI + uvicorn)
- **6379** -- Redis broker (inter-agent messaging)

Volumes persist your data across restarts:

- `./data` -- SQLite databases (aibrain.db)
- `./memory` -- Legacy memory database
- `./config.json` -- Your configuration
- `./workflows` -- Workflow scripts

For VPS-only deployment (no frontend, agent-to-agent only):

```bash
docker compose -f docker-compose.vps.yml up -d
```

The Dockerfile uses a multi-stage build: Python 3.11 for the backend, Node 20 for the frontend build, and a final slim image with nginx for production serving. Health checks run every 30 seconds against the `/api/health` endpoint.

---

## Boss Agent Orchestrator (Pro/Team)

Run multiple AI agent workers in parallel, each in its own isolated Docker container, coordinated by a single boss process on the host. Workers execute tasks independently -- shell commands, Python scripts, or full Playwright browser automation -- and merge their results back into one shared brain. More workers means faster knowledge accumulation.

```
Boss (host) ──→ Redis ──→ Worker 1 (Docker + Playwright)
                      ──→ Worker 2 (Docker + Playwright)
                      ──→ Worker N
              ←── Session merge ←── All workers feed one brain
```

### Quick Start

```bash
docker compose -f docker-compose.boss.yml up -d --scale worker=2
```

### CLI

```bash
python -m boss.boss assign "Research competitor pricing"   # assign a task
python -m boss.boss status                                 # check worker status
python -m boss.boss results                                # collect completed results
python -m boss.boss scale 4                                # scale to 4 workers
python -m boss.boss teardown                               # stop all workers
```

### Worker Types

| Type | Description |
|---|---|
| **Shell** | Execute shell commands in an isolated container |
| **Python** | Run Python scripts with full library access |
| **Playwright** | Browser automation -- login flows, scraping, form filling, screenshots |

### Features

- **Communication** -- Workers report progress, ask questions, and broadcast to peers through Redis channels
- **Budget guards** -- Token limits and iteration caps prevent runaway costs per task
- **Session merge** -- Results from all workers deduplicate and merge into the host brain, ranked by score
- **Evolution bridge** -- Worker outcomes feed the Evolution Engine's skill learning loop
- **Checkpoint and replay** -- Tasks checkpoint progress; a crashed worker resumes from its last checkpoint
- **Model routing** -- Complex reasoning routes to Opus, standard tasks to Sonnet, monitoring to Haiku

### Pricing

| Tier | Workers | Scope |
|---|---|---|
| **Free** | 1 instance | Single machine |
| **Pro** | Up to 3 workers | Single machine |
| **Team** | Up to 10 workers | Cross-machine coordination |

---

## MCP Server -- Connect Any AI Agent

AIBrain includes an MCP (Model Context Protocol) server that gives any compatible AI agent persistent memory with selective routing. Connect Claude Code, Cursor, Windsurf, or any MCP client.

**Performance:** #1 on LongMemEval benchmark (Ra@5=0.789, NDCG@5=0.796) -- beats Contriever (110M params) and Stella V5 (1.5B params) with a 22MB model + rule-based routing.

### Setup

Add to your `.claude.json` or `.mcp.json`:

```json
{
  "mcpServers": {
    "aibrain-memory": {
      "command": "python",
      "args": ["/path/to/aibrain/mcp_server.py"]
    }
  }
}
```

### Three Modes

| Mode | Config | What Ships | Performance |
|------|--------|-----------|-------------|
| **No ML** | `AIBRAIN_EMBEDDING_MODEL=none` | FTS5 only, 0 deps | NDCG 0.692 |
| **Default** | (no config needed) | MiniLM 22MB, 384-dim | Ra@5 0.789, NDCG 0.796 |
| **bge-base** | `AIBRAIN_EMBEDDING_MODEL=BAAI/bge-base-en-v1.5` | 110MB, 768-dim | Ra@5 0.791, NDCG 0.812 |
| **Custom** | `AIBRAIN_EMBEDDING_MODEL=your/model` | Any sentence-transformer | Routing adapts |

### Tools Provided

- `memory_store` -- Store a memory (auto-enriched for better retrieval)
- `memory_search` -- Search with selective routing (auto-detects query type, or pass `search_type` hint)
- `memory_recall` -- Recall top memories by importance
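
Since the server speaks JSON-RPC 2.0 over stdio, a tool call is a single JSON line on its stdin. A rough sketch (the initialize handshake and auth-token handling are omitted, and the exact envelope may differ):

```python
# Simplified JSON-RPC-over-stdio sketch; handshake and auth token handling omitted.
import json
import subprocess

proc = subprocess.Popen(
    ["python", "/path/to/aibrain/mcp_server.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "memory_search", "arguments": {"query": "quarterly revenue"}},
}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())  # JSON-RPC response with the search results
```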

### Dependencies

```bash
pip install mcp sentence-transformers sqlite-vec
```

All optional -- the server gracefully degrades. No embeddings? FTS5 only. No enricher? Raw content. No sqlite-vec? Skip vector search.

---

## Development Setup

The fastest way to start both backend and frontend:

```bash
# Windows
start.bat

# Unix/Mac
./start.sh

# Or using Make
make dev
```

To run manually without the launchers:

```bash
# Backend
cd backend
pip install -r requirements.txt
python -m uvicorn main:app --host 0.0.0.0 --port 8001 --reload

# Frontend (separate terminal)
cd frontend
npm install
npm run dev
```

Other Make targets: `make install`, `make build`, `make docker`, `make clean`.

The frontend dev server runs on port 5173 and proxies API requests to port 8001.

---

## Connect Your Agent

AIBrain works with any AI agent -- Claude, GPT, Gemini, Ollama, or your own framework.

### Option 1: Just tell your agent

Paste this to any AI agent (Claude Code, ChatGPT, etc.):

```
You have access to AIBrain at http://localhost:8001
Store memories: POST /api/agent/memories  Body: {"name": "...", "content": "...", "type": "reference", "tags": ""}
Search memories: GET /api/agent/memories/search?q=your+query
Send chat: POST /api/agent/chat/send  Body: {"content": "message", "role": "agent"}
Dashboard: http://localhost:5173
```

### Option 2: Python SDK

```python
from aibrain_sdk import AIBrain

agent = AIBrain("http://localhost:8001")
agent.remember("user prefers dark mode", tags=["preference"])
results = agent.recall("dark mode")
agent.say("Task complete.")
approval = agent.request_approval("Deploy to production")
```

### Option 3: Example agents

```bash
# Claude
python examples/claude_agent.py

# OpenAI GPT
python examples/openai_agent.py

# Local LLM (Ollama -- no API key needed)
python examples/ollama_agent.py
```

The SDK uses only Python stdlib (urllib) -- zero dependencies. See `aibrain_sdk.py` for the full API.

---

## Remote Access

Open `http://<your-machine-ip>:5173` from any browser on your network. The responsive design adapts to any screen size. For remote access over the internet, use an SSH tunnel or put behind a reverse proxy with auth.

---

## Brain Export and Import

Transfer your agent's entire memory to another machine:

```bash
# Export
curl http://localhost:8001/api/agent/memories/export > brain.json

# Import on new machine
curl -X POST http://localhost:8001/api/agent/memories/import \
  -H "Content-Type: application/json" \
  -d @brain.json
```

Duplicates are automatically skipped during import.

---

## Project Structure

```
aibrain/
├── aibrain_sdk.py            # Python SDK — connect any agent
├── setup_wizard.py           # Interactive first-run setup
├── docker-compose.yml        # One-command Docker deploy
├── Dockerfile                # Multi-stage production build
├── Makefile                  # dev, install, build, docker, clean
├── start.sh                  # Unix/Mac launcher
├── start.bat                 # Windows launcher
├── config.json.example       # Configuration template
├── scheduled_jobs.json       # Workflow schedules
├── backend/
│   ├── main.py               # FastAPI server + scheduler + chat + WS
│   ├── aibrain_api.py        # Memory, cron, workflow, skill, search APIs
│   ├── chat_responder.py     # Multi-provider LLM chat (Claude/GPT/Ollama/CLI)
│   ├── llm_router.py         # litellm-backed unified model routing
│   ├── entity_extractor.py   # Pattern-based NER for knowledge graph
│   ├── trace_logger.py       # LLM call tracing with cost tracking
│   ├── action_executor.py    # Approved action execution engine
│   ├── workflow_generator.py # Natural language to workflow script
│   ├── webhooks.py           # Webhook registration and invocation
│   ├── scheduler.py          # APScheduler cron engine
│   ├── approval_queue.py     # Human-in-the-loop approval system
│   └── requirements.txt      # Python dependencies
├── frontend/
│   ├── src/App.jsx           # Router + 28-component navigation
│   └── src/components/
│       ├── HomeDashboard.jsx      # System overview with sparklines
│       ├── MemoryDashboard.jsx    # Memory search, browse, edit
│       ├── MemoryGraph.jsx        # Force-directed knowledge graph
│       ├── CronDashboard.jsx      # Workflow scheduling and control
│       ├── ChatPanel.jsx          # WebSocket chat with commands
│       ├── ApprovalQueue.jsx      # Human-in-the-loop approvals
│       ├── AgentStatus.jsx        # Agent health and peer mesh
│       ├── ActivityTimeline.jsx   # Real-time activity feed (WS)
│       ├── WorkflowBuilder.jsx    # Visual drag-and-drop workflow editor
│       ├── WorkflowLibrary.jsx    # Content pipeline and library
│       ├── WebhooksDashboard.jsx  # Webhook management
│       ├── CostDashboard.jsx      # LLM cost tracking and budgets
│       ├── MCPHub.jsx             # MCP server management
│       ├── SkillMarketplace.jsx   # Skill browse, install, delete
│       ├── Settings.jsx           # Config, provider testing, health
│       ├── SetupWizard.jsx        # First-run guided setup
│       ├── CommandPalette.jsx     # Global Ctrl+K search overlay
│       ├── NotificationCenter.jsx # Real-time notification panel
│       ├── KeyboardShortcuts.jsx  # Shortcut overlay (press ?)
│       ├── SystemLogs.jsx         # Terminal-style log viewer
│       ├── ApiPlayground.jsx      # Interactive API tester
│       ├── BackupManager.jsx      # Memory backup/restore
│       ├── TaskRunner.jsx         # Task queue with progress
│       ├── Integrations.jsx       # Service connection hub
│       ├── OnboardingTour.jsx     # First-run walkthrough
│       ├── ErrorBoundary.jsx      # React error boundary
│       ├── MobileLayout.jsx       # Mobile-optimized layout
│       ├── MobilePairing.jsx      # QR code pairing for mobile
│       └── Toast.jsx              # Toast notification component
├── examples/
│   ├── basic_agent.py        # Minimal agent example
│   ├── claude_agent.py       # Claude-powered agent
│   ├── openai_agent.py       # GPT-powered agent
│   └── ollama_agent.py       # Local LLM agent (no API key)
├── workflows/                # Self-contained workflow scripts
└── memory/
    └── memory.db             # SQLite + FTS5 database
```

---

## Profile System

Manage multiple brain configurations for different use cases:

```bash
aibrain profile list                              # list all profiles with mode and memory count
aibrain profile create research --mode isolated   # create with dedicated brain DB
aibrain profile create team --mode hive           # create with shared brain DB
aibrain profile info research                     # show profile details
aibrain profile delete research --confirm         # unregister a profile
```

**Two modes:**

| Mode | Description |
|------|-------------|
| **Hive** | Shared brain DB across profiles. Use when spreading work across multiple agent instances that should share memories. |
| **Isolated** | Dedicated brain DB per profile (`aibrain_<name>.db`). Use for training specialist brains or marketplace packaging. |

Hive profiles share the primary brain, skills, agents, and rules via filesystem junctions. Isolated profiles get their own database at the data directory level.

---

## Interactive Settings

A single menu to configure every tunable in AIBrain:

```bash
aibrain settings    # opens the master settings menu
```

Nine sub-menus, each showing live status:

| Sub-menu | What it controls |
|----------|-----------------|
| Compression | Per-type token compression toggles (8 categories) |
| Learning | Signal detection thresholds and minimum message length |
| Model routing | LLM provider selection and task-type routing rules |
| Workflows | Enable/disable workflows with live count |
| Setup | Reconfigure individual setup wizard sections |
| Brain sync | Push/pull direction and table selection |
| Evolution | Auto-evolution toggle and interval |
| Agents | Company agent roster management |
| Schedule | Cron job configuration |

Arrow keys or number keys to navigate, Enter to open a sub-menu, Q or Escape to go back.

---

## Brain Sync

Full brain portability across machines and agents. Export covers 23 tables -- not just memories, but companies, agents, skills, workflows, evolution history, entities, approvals, permissions, and schedules:

```bash
aibrain brain export                   # export full brain to JSON
aibrain brain export --output my.json  # export to specific file
aibrain brain import brain.json        # import another brain (deduplicates automatically)
aibrain brain merge /path/or/git-url   # merge from git repo or local path
aibrain brain stats                    # show table counts and brain size
```

Credentials are automatically scrubbed during export. The sync format is JSON, safe to commit to a private repo for backup or transfer between machines.

---

## Token Compression

`aibrain-compress` reduces CLI output before it reaches your agent, saving 40-99% of tokens depending on command type. Pure Python, zero telemetry, zero network calls.

```bash
# Wrap mode -- run and compress in one call
aibrain-compress git status
aibrain-compress pytest
aibrain-compress cargo build

# Pipe mode -- used by the shell hook
git diff | aibrain-compress -- git diff

# Install shell hook (auto-compresses git, pytest, cargo, etc.)
aibrain-compress init

# Manage per-type toggles
aibrain settings    # then select Compression
```

**Eight filter categories:**

| Category | Commands | Typical savings |
|----------|----------|-----------------|
| Git | status, diff, log, push, pull, fetch, stash | 50-87% |
| Test | pytest, vitest, jest, cargo test | 90-99% |
| Build | tsc, cargo build, eslint, prettier, next build | 40-80% |
| Docker | docker build, docker compose | 70%+ |
| Pip | pip install, pip list | 70%+ |
| Env | env, printenv | 60%+ |
| JSON | cat on JSON files | 60%+ |
| Traceback | Python tracebacks | 50%+ |

Each category can be toggled independently via `aibrain settings` or environment variables (`AIBRAIN_COMPRESS_DISABLE_GIT`, etc.).

**Python API:**
```python
from aibrain.tools.compress import compress
compressed = compress(command=['git', 'status'], output=raw_output)
```

---

## Brain Fitness

A single objective score (0.0--1.0) that measures how well your brain is performing:

```bash
aibrain fitness                # current score with trend
aibrain fitness --window 60    # use 60-day window instead of default 30
aibrain fitness --trend        # show score progression over time
```

The fitness score combines execution success rate, quality scores, and learning velocity. Scores are interpreted as: 0.8+ excellent, 0.6+ good, 0.4+ developing, below 0.4 needs attention.
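
Conceptually it is a weighted blend of those three signals; a sketch with illustrative weights (the real weighting lives inside AIBrain):

```python
# Illustrative only -- weights are assumptions, not AIBrain's internal formula.
def fitness(success_rate: float, avg_quality: float, learning_velocity: float) -> float:
    """All inputs normalized to 0.0-1.0; returns the blended fitness score."""
    score = 0.4 * success_rate + 0.4 * avg_quality + 0.2 * learning_velocity
    return round(min(max(score, 0.0), 1.0), 2)

print(fitness(success_rate=0.92, avg_quality=0.81, learning_velocity=0.55))  # 0.8 -> excellent
```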

---

## Product Metrics

Operationalize your agent's five core claims with measured data from the live database:

```bash
aibrain metrics    # print the full product metrics dashboard
```

Tracks: consolidation rate (decreasing correction rate over time — the brain compounds as CLS slow-learning extracts lessons from fast-episodic traces), memory growth, workflow execution health, consolidation cycle effectiveness, and retrieval quality. Each metric shows actual measured values, not estimates.

---

## State Predictor

Predict which workflow will run next based on execution history patterns:

```bash
aibrain next       # predict top 3 most likely next workflows
aibrain next 5     # predict top 5
```

Uses frequency-based transition modeling with time-of-day and day-of-week weighting. No ML -- just counted transitions weighted by recency. Each prediction includes a confidence score and reasoning.
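
A sketch of that counted-transition idea; the recency decay and hour-of-day bonus below are illustrative choices, not the exact weights the predictor uses:

```python
# Counted-transition sketch; decay and time-of-day weights are illustrative.
from collections import Counter
from datetime import datetime

def predict_next(history: list[dict], top_n: int = 3) -> list[tuple[str, float]]:
    """history: chronological runs like {"workflow": "rss_digest", "hour": 8}."""
    last = history[-1]["workflow"]
    now_hour = datetime.now().hour
    scores: Counter = Counter()
    for i, (prev, cur) in enumerate(zip(history, history[1:])):
        if prev["workflow"] != last:
            continue
        weight = 0.9 ** (len(history) - 1 - i)      # recency: newer transitions count more
        if abs(cur["hour"] - now_hour) <= 1:        # time-of-day affinity
            weight += 0.5
        scores[cur["workflow"]] += weight
    total = sum(scores.values()) or 1.0
    return [(wf, round(score / total, 2)) for wf, score in scores.most_common(top_n)]
```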

---

## Correction Tracking

Track whether your brain is learning per domain by analyzing correction and feedback patterns:

```bash
aibrain corrections              # show correction trends across all domains
aibrain corrections --days 90    # use 90-day window
```

Classifies corrections by domain (security, git, deployment, config, testing, data, general), buckets into 7-day windows, and computes trend direction. A decreasing correction rate means the brain is learning.
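
A sketch of the bucketing and trend step (the domain classifier and data source are simplified away):

```python
# Trend sketch: bucket correction timestamps into 7-day windows, compare halves.
from datetime import datetime, timedelta

def correction_trend(corrections: list[datetime], days: int = 90) -> str:
    cutoff = datetime.now() - timedelta(days=days)
    buckets = [0] * (days // 7 + 1)
    for ts in corrections:
        if ts >= cutoff:
            buckets[(ts - cutoff).days // 7] += 1
    half = len(buckets) // 2
    older, newer = sum(buckets[:half]), sum(buckets[half:])
    # Fewer corrections in the recent half means the brain is learning
    return "improving" if newer < older else "flat or regressing"
```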

---

## Interactive Approvals

An interactive terminal menu for reviewing and acting on queued approval requests:

```bash
aibrain approvals    # launch interactive approval menu
```

Items are grouped by category. Keyboard controls:

| Key | Action |
|-----|--------|
| Space | Toggle focused item |
| S | Select all in category |
| A | Approve all checked |
| R | Reject all checked |
| D | Detail view |
| G | Next category |
| P | Next page (if >15 items) |
| Up/Down | Navigate |
| 1-9 | Quick toggle |
| Q | Quit |

---

## Marketing Stats

Track download metrics and marketing performance:

```bash
aibrain marketing stats              # interactive dashboard
aibrain marketing stats --summary    # plain text summary (non-interactive)
aibrain marketing stats --sync       # refresh data from sources
```

---

## Pre-Check

A binary pass/fail gate to run before any public exposure (PyPI publish, blog post, open-source push):

```bash
aibrain pre-check                   # scan current repo
aibrain pre-check --path /repo      # scan a specific repo
```

Checks for: leaked credentials in public-facing files, internal tool names that should not appear in public content, missing license files, and other release-readiness criteria.
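A minimal sketch of one such check, a regex scan for common credential patterns in public-facing text files (the patterns and file selection here are illustrative, not the full rule set):

```python
import re
from pathlib import Path

# Illustrative patterns only; the real gate covers many more credential shapes
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),    # private key material
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}"),
]

def scan_repo(root: str = ".") -> bool:
    """Return True (pass) when no pattern matches; print offenders otherwise."""
    passed = True
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".md", ".txt", ".json", ".yml", ".yaml"}:
            text = path.read_text(errors="ignore")
            for pattern in PATTERNS:
                if pattern.search(text):
                    print(f"FAIL: possible credential in {path}")
                    passed = False
    return passed
```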

---

## Doctor

System health scanner that finds and fixes common problems:

```bash
aibrain doctor    # run full diagnostic
```

Scans for shadow databases (duplicate DBs created by different processes), rescues stranded memories from shadow DBs into the primary, fixes MCP server configuration, validates filesystem junctions, and runs enforcement constraint checks. Safe to run repeatedly -- it reports what it finds and fixes.
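The shadow-database rescue boils down to attaching the stray SQLite file and copying over rows the primary does not have; a simplified sketch with assumed table and column names (`memories`, `id`):

```python
import sqlite3

def rescue_memories(primary_db: str, shadow_db: str) -> int:
    """Copy rows that exist only in the shadow DB into the primary's memories table."""
    conn = sqlite3.connect(primary_db)
    conn.execute("ATTACH DATABASE ? AS shadow", (shadow_db,))
    cur = conn.execute(
        "INSERT INTO main.memories "
        "SELECT * FROM shadow.memories s "
        "WHERE NOT EXISTS (SELECT 1 FROM main.memories m WHERE m.id = s.id)"
    )
    conn.commit()
    rescued = cur.rowcount
    conn.close()
    return rescued
```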

---

## CLI Reference

Complete list of `aibrain` commands:

### Setup and System

| Command | Description |
|---------|-------------|
| `aibrain setup` | First-time setup wizard (--auto for no prompts, --reconfigure to update) |
| `aibrain init` | Initialize the brain database |
| `aibrain start` | Restore stack, start services, install hooks |
| `aibrain status` | Show current stack state (crons, hooks, services) |
| `aibrain serve` | Start API backend + dashboard UI + open browser |
| `aibrain server` | Start FastAPI server only (no dashboard) |
| `aibrain update` | Safe self-update (backup, pull, validate, auto-config) |
| `aibrain version` | Print installed version |
| `aibrain doctor` | Scan for shadow DBs, rescue memories, fix MCP config |
| `aibrain health` | Brain health report (memories, quality, tier usage) |
| `aibrain fitness` | Brain fitness objective score and trend |
| `aibrain pre-check` | Pre-visibility checklist before public release |
| `aibrain settings` | Interactive master settings menu (9 sub-menus) |
| `aibrain models` | Show available local + remote models with routing |

### Memory and Brain

| Command | Description |
|---------|-------------|
| `aibrain summary` | Dashboard statistics |
| `aibrain search "query"` | Cross-table full-text search |
| `aibrain browse` | Open memory browser in default browser |
| `aibrain forget "topic"` | Preview deletions (add --confirm to execute) |
| `aibrain brain export` | Export full brain to JSON (23 tables, credentials scrubbed) |
| `aibrain brain import <file>` | Import brain from JSON (auto-deduplicates) |
| `aibrain brain merge <source>` | Merge from git URL or local path |
| `aibrain brain stats` | Brain table counts and size |
| `aibrain chat` | Interactive conversational mode with the brain |

### Workflows and Skills

| Command | Description |
|---------|-------------|
| `aibrain workflows list` | List all workflows with status |
| `aibrain workflows enable <n>` | Enable a workflow (auto-enables dependencies) |
| `aibrain workflows disable <n>` | Disable a workflow |
| `aibrain workflows enable --recommended` | Enable starter set |
| `aibrain workflows sync` | Create OS scheduled tasks (--dry-run to preview) |
| `aibrain workflows run <n>` | Run a workflow immediately |
| `aibrain workflows deps` | Show dependency map |
| `aibrain skills` | Show skill inventory with episode counts and trends |
| `aibrain packs` | Browse available brain packs (domain bundles) |
| `aibrain packs activate <n>` | Activate a pack and its workflows |

### Profiles and Configuration

| Command | Description |
|---------|-------------|
| `aibrain profile list` | List all profiles (hive/isolated) with memory counts |
| `aibrain profile create <name>` | Create a profile (--mode hive or isolated) |
| `aibrain profile info <name>` | Show profile details |
| `aibrain profile delete <name>` | Unregister a profile (requires --confirm) |
| `aibrain config get <key>` | Read a config value |
| `aibrain config set <key> <value>` | Write a config value |
| `aibrain create <name>` | Scaffold a new AIBrain project |

### Analytics and Diagnostics

| Command | Description |
|---------|-------------|
| `aibrain metrics` | Product metrics dashboard (5 measured claims) |
| `aibrain corrections` | Correction trends per domain (learning rate) |
| `aibrain next [N]` | Predict next N most likely workflows (default 3) |
| `aibrain audit [N]` | Show last N execution audit entries (default 20) |
| `aibrain marketing stats` | Download metrics and marketing performance |

### Agent Teams and Companies

| Command | Description |
|---------|-------------|
| `aibrain company create <name>` | Create a company (agent org structure) |
| `aibrain company list` | List companies |
| `aibrain agent list --company <id>` | List agents in a company |
| `aibrain agent hire <name> --company <id>` | Hire a new agent |
| `aibrain task create <title> --company <id>` | Create a task |
| `aibrain task list --company <id>` | List tasks |
| `aibrain approvals` | Interactive approval queue |
| `aibrain inbox --company <id>` | Show pending approvals |
| `aibrain approve <id>` | Approve a queued item (--note for comment) |
| `aibrain reject <id>` | Reject a queued item (--note for comment) |

### Brain History and Mesh

| Command | Description |
|---------|-------------|
| `aibrain history snapshot "msg"` | Snapshot current brain state |
| `aibrain history list` | List all snapshots |
| `aibrain history rollback <id>` | Roll back to a snapshot |
| `aibrain history branch <name>` | Branch brain for experiments |
| `aibrain history diff <a> [b]` | Compare brain versions |
| `aibrain mesh status` | Show mesh agent status |
| `aibrain mesh register <n> <db>` | Register agent in mesh |
| `aibrain mesh merge` | Merge all agents (consensus) |
| `aibrain mesh distribute` | Push merged brain to all agents |

### Stack Management

| Command | Description |
|---------|-------------|
| `aibrain stack save [name]` | Save named stack snapshot |
| `aibrain stack restore [name]` | Restore from snapshot |
| `aibrain stack rollback [N]` | Roll back N steps (default 1, keeps last 3) |
| `aibrain stack set-default` | Save current state as default |
| `aibrain stack list` | List saved snapshots |

### Training and Testing

| Command | Description |
|---------|-------------|
| `aibrain train <workflow> -n N` | Training loop with feedback (default 3 iterations) |
| `aibrain test -n N [workflows...]` | Automated quality evaluation (--save-baseline) |

### Marketplace

| Command | Description |
|---------|-------------|
| `aibrain marketplace package --name "Brain"` | Package brain for sale |
| `aibrain marketplace validate <file>` | Validate a .brain file |
| `aibrain marketplace publish <file>` | Publish to marketplace |
| `aibrain marketplace search "query"` | Search marketplace |
| `aibrain marketplace install <id>` | Install a brain |
| `aibrain marketplace revenue` | Revenue dashboard |

### Token Compression

| Command | Description |
|---------|-------------|
| `aibrain-compress <command>` | Run a command with compressed output |
| `aibrain-compress init` | Install shell hook (auto-compresses git, pytest, etc.) |
| `aibrain-compress uninstall` | Remove shell hook |
| `aibrain-compress config show` | Show effective compression config |

### Other

| Command | Description |
|---------|-------------|
| `aibrain license` | Show license status and tier |
| `aibrain license activate <key>` | Activate a license key |
| `aibrain license tiers` | Show pricing tiers |
| `aibrain mcp` | Start MCP server (stdio mode) |
| `aibrain tools` | List available built-in tools |
| `aibrain encrypt` | Encryption utilities |
| `aibrain attack search "query"` | Search ATT&CK techniques, malware, tools |
| `aibrain events` | Query the event bus |
| `aibrain scale stats` | Brain scaling health report |
| `aibrain scale compact` | Archive stale memories |

---

## Account Management

myaibrain.org ships with a full user account system on top of the AIBrain backend: email+password signup, optional OAuth through GitHub / Google / Microsoft, optional TOTP 2FA, and an account dashboard showing subscription tier, license key, and Stripe portal access.

### Model

- **One account per email.** Users can add multiple login methods to the same account: password, GitHub, Google, Microsoft.
- **Any method lands at the same dashboard** with the same license key and subscription state.
- **TOTP 2FA is optional** and works regardless of login method. Codes are RFC 6238 (SHA-1, 30s window, 6 digits) with ±1 step tolerance for clock skew — compatible with Google Authenticator, Authy, 1Password. A verification sketch follows this list.
- **Passwords are bcrypt cost 12.** Sessions are 32-byte random tokens in HttpOnly Secure SameSite=Lax cookies, 30-day sliding expiry. Failed logins are rate-limited to 5 per email per 15 minutes.
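
The TOTP check described above can be sketched with the standard library alone (SHA-1, 30-second step, 6 digits, ±1 step tolerance); AIBrain's actual implementation may differ in detail:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int) -> str:
    """RFC 6238: HMAC-SHA1 over the 30-second counter, dynamically truncated to 6 digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", step), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"

def verify_totp(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept the current step plus or minus `window` steps to tolerate clock skew."""
    now_step = int(time.time()) // 30
    return any(hmac.compare_digest(totp(secret_b32, now_step + drift), submitted)
               for drift in range(-window, window + 1))
```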

### Endpoints

All under the `/auth/*` prefix on the backend:

| Method | Path | What it does |
|---|---|---|
| POST | /auth/signup | email + password, bcrypt, returns session cookie |
| POST | /auth/login | email + password (+ optional TOTP), returns session cookie |
| POST | /auth/logout | deletes the current session |
| POST | /auth/password/reset/request | sends a reset link via SMTP (15-min single-use token) |
| POST | /auth/password/reset/confirm | accepts token, sets a new password, invalidates existing sessions |
| GET  | /auth/verify-email?token=... | marks the email as verified |
| GET  | /auth/oauth/providers | returns which OAuth providers are configured (frontend hides disabled buttons) |
| GET  | /auth/oauth/{provider}/start | 302 to the provider with PKCE + state + nonce |
| GET  | /auth/oauth/{provider}/callback | completes the flow, creates or links the account, issues a session |
| POST | /auth/oauth/{provider}/link | initiates a link flow for the logged-in user (returns authorize URL) |
| POST | /auth/2fa/setup | generates a TOTP secret + `otpauth://` URI, requires password reauth |
| POST | /auth/2fa/verify | verifies the first code, flips `totp_enabled` on |
| POST | /auth/2fa/disable | disables TOTP, requires password reauth |
| GET  | /auth/me | current user, linked methods, 2FA status, subscription tier + masked license key |

### OAuth setup for self-hosting

OAuth providers are enabled only when both the client ID **and** client secret environment variables are set. If either is missing, the provider's button is simply hidden; no error is raised.

```bash
# GitHub — register at https://github.com/settings/developers
export GITHUB_OAUTH_CLIENT_ID=...
export GITHUB_OAUTH_CLIENT_SECRET=...

# Google — console.cloud.google.com -> APIs & Services -> Credentials
export GOOGLE_OAUTH_CLIENT_ID=...
export GOOGLE_OAUTH_CLIENT_SECRET=...

# Microsoft — portal.azure.com -> App registrations
export MICROSOFT_OAUTH_CLIENT_ID=...
export MICROSOFT_OAUTH_CLIENT_SECRET=...

# Where OAuth callbacks come back to (used to build redirect_uri)
export AIBRAIN_BASE_URL=https://api.myaibrain.org

# Where the frontend lives (used for post-login redirects and email links)
export MYAIBRAIN_SITE_URL=https://myaibrain.org
```

For each provider, register the callback URL as `{AIBRAIN_BASE_URL}/auth/oauth/{provider}/callback`.
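
A small sketch of the enablement check and `redirect_uri` construction described above (env var names as documented; the helper functions themselves are illustrative):

```python
import os

PROVIDERS = ("github", "google", "microsoft")

def enabled_providers() -> list[str]:
    """A provider is offered only when both its client ID and client secret are set."""
    return [
        p for p in PROVIDERS
        if os.environ.get(f"{p.upper()}_OAUTH_CLIENT_ID")
        and os.environ.get(f"{p.upper()}_OAUTH_CLIENT_SECRET")
    ]

def redirect_uri(provider: str) -> str:
    """Callback URL to register with the provider: {AIBRAIN_BASE_URL}/auth/oauth/{provider}/callback"""
    base = os.environ.get("AIBRAIN_BASE_URL", "http://localhost:8000").rstrip("/")
    return f"{base}/auth/oauth/{provider}/callback"
```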

### SMTP for email verification and password reset

```bash
export SMTP_HOST=smtp.gmail.com
export SMTP_PORT=587
export SMTP_USER=you@example.com
export SMTP_PASSWORD=an-app-password
export SMTP_FROM=noreply@myaibrain.org
```

When SMTP is not configured, signup and reset requests still succeed — the email simply isn't sent, a warning is logged, and the user can retry once SMTP is available.
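
A sketch of that graceful degradation, assuming a plain `smtplib` sender rather than AIBrain's actual mailer:

```python
import logging
import os
import smtplib
from email.message import EmailMessage

log = logging.getLogger("aibrain.mail")

def send_email(to: str, subject: str, body: str) -> bool:
    """Send via SMTP when configured; otherwise log a warning and report 'not sent'."""
    host = os.environ.get("SMTP_HOST")
    if not host:
        log.warning("SMTP not configured; skipping email to %s", to)
        return False
    msg = EmailMessage()
    msg["From"] = os.environ.get("SMTP_FROM", "noreply@localhost")
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(host, int(os.environ.get("SMTP_PORT", "587"))) as server:
        server.starttls()
        server.login(os.environ["SMTP_USER"], os.environ["SMTP_PASSWORD"])
        server.send_message(msg)
    return True
```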

### Frontend pages

The `sindecker/myaibrain-site` repo has matching static pages:

- `/signup` — email/password + OAuth buttons
- `/login` — email/password + OAuth + 2FA challenge
- `/account` — dashboard showing email, linked methods, 2FA status, tier, masked license key, and `valid_until`, plus buttons for the Stripe portal, cancellation, 2FA, account linking, and password change
- `/password-reset` and `/password-reset/confirm`
- `/verify-email`
- `/2fa/setup`

All pages use the existing dark theme and client-side validation. Server-side validation is always the source of truth.

### Database tables

All live in `aibrain.db` and are created by `_migrate_schema`, so existing installs pick them up on the next start. The tables:

- `users` (with `password_hash`, `totp_secret`, `totp_enabled`, `email_verified`, and `last_login_at` added as new columns if the RBAC schema was pre-existing)
- `oauth_links` (composite primary key on `user_id` + `provider`)
- `sessions`
- `password_reset_tokens`
- `email_verification_tokens`
- `license_state` (per-user subscription state written by the Stripe webhook)
- `login_attempts` (rate limiting for failed logins)
- `oauth_states` (short-lived PKCE + state + nonce storage)
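
For orientation, here is what two of these tables might look like; the column sets are inferred from the description above, not the exact `_migrate_schema` output:

```python
import sqlite3

conn = sqlite3.connect("aibrain.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS oauth_links (
    user_id          INTEGER NOT NULL,
    provider         TEXT    NOT NULL,
    provider_user_id TEXT,
    PRIMARY KEY (user_id, provider)      -- composite PK, as described
);
CREATE TABLE IF NOT EXISTS sessions (
    token      TEXT PRIMARY KEY,         -- 32-byte random token
    user_id    INTEGER NOT NULL,
    expires_at TEXT                      -- 30-day sliding expiry
);
""")
conn.close()
```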

---

## Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| `Ctrl+K` | Open command palette |
| `?` | Show keyboard shortcuts overlay |
| `N` | Toggle notification center |
| `>query` | Search memories, workflows, and skills from palette |
| `Escape` | Close any modal or palette |
| `Arrow keys` | Navigate command palette results |
| `Enter` | Select highlighted palette result |

---

## License

Proprietary. All rights reserved. See [LICENSE](LICENSE) for details.

For support, visit [myaibrain.org](https://myaibrain.org).
