# MindRoom

> AI agents that live in Matrix and work everywhere via bridges


## What is MindRoom?

MindRoom is an AI agent orchestration system with Matrix integration. It provides:

- **Multi-agent collaboration** - Configure multiple specialized agents that can work together
- **Matrix-native** - Agents live in Matrix rooms and respond to messages
- **Persistent memory** - Agent, room, and team-scoped memory that persists across conversations
- **100+ tool integrations** - Connect to external services like GitHub, Slack, Gmail, and more
- **Hot-reload configuration** - Update `config.yaml` and agents restart automatically
- **Scheduled tasks** - Schedule agents to run at specific times with cron expressions or natural language
- **Voice messages** - Speech-to-text transcription with intelligent command recognition
- **Image analysis** - Pass images to vision-capable AI models for analysis
- **Authorization** - Fine-grained access control for users and rooms

> [!TIP]
> **Matrix is the backbone** - MindRoom agents communicate through the Matrix protocol, which means they can be bridged to Discord, Slack, Telegram, and other platforms.

## Quick Start

### Recommended: Full Stack Docker Compose (bundled dashboard + Matrix + Element)

**Prereqs:** Docker + Docker Compose.

```bash
git clone https://github.com/mindroom-ai/mindroom-stack
cd mindroom-stack
cp .env.example .env
$EDITOR .env  # add at least one AI provider key

docker compose up -d
```

Open:

- MindRoom UI: http://localhost:8765
- Element: http://localhost:8080
- Matrix homeserver: http://matrix.localhost:8008

### Manual Install (advanced)

Use this if you already have a Matrix homeserver and want to run MindRoom directly.

```bash
# Using uv
uv tool install mindroom

# Or using pip
pip install mindroom
```

### Basic Usage (manual)

1. Create a `config.yaml`:

```yaml
agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant
    model: default
    rooms: [lobby]

models:
  default:
    provider: openai
    id: gpt-5.2

defaults:
  tools: [scheduler]
  markdown: true
```

2. Set up your environment in `.env`:

```
# Matrix homeserver (must allow open registration)
MATRIX_HOMESERVER=https://matrix.example.com

# AI provider API keys
OPENAI_API_KEY=your_api_key
```

3. Run MindRoom:

```bash
mindroom run
```

For local development with a host-installed backend plus Dockerized Synapse + Cinny (Linux/macOS), you can bootstrap the local stack with:

```bash
mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
mindroom run
```

## Features

| Feature                      | Description                                                       |
| ---------------------------- | ----------------------------------------------------------------- |
| **Agents**                   | Single-specialty actors with specific tools and instructions      |
| **Teams**                    | Collaborative bundles of agents (coordinate or collaborate modes) |
| **Router**                   | Built-in traffic director that routes messages to the right agent |
| **Memory**                   | Mem0-inspired memory system with agent, room, and team scopes     |
| **Knowledge Bases**          | File-backed RAG indexing with per-agent base assignment           |
| **Tools**                    | 100+ integrations for external services                           |
| **Skills**                   | OpenClaw-compatible skills system for extended agent capabilities |
| **Scheduling**               | Schedule tasks with cron expressions or natural language          |
| **Voice**                    | Speech-to-text transcription for voice messages                   |
| **Images**                   | Pass user-sent images to vision-capable AI models                 |
| **File & Video Attachments** | Context-scoped file and video handling with attachment IDs        |
| **Cultures**                 | Shared evolving principles across groups of agents                |
| **Authorization**            | Fine-grained user and room access control                         |
| **OpenAI-Compatible API**    | Use agents from LibreChat, Open WebUI, or any OpenAI client       |
| **Hot Reload**               | Config changes are detected and agents restart automatically      |

## Architecture

```
┌─────────────────────────────────────────────────────┐
│                 Matrix Homeserver                    │
└─────────────────────┬───────────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────────┐
│              MultiAgentOrchestrator                  │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐   │
│  │ Router  │ │ Agent 1 │ │ Agent 2 │ │  Team   │   │
│  └─────────┘ └─────────┘ └─────────┘ └─────────┘   │
└─────────────────────────────────────────────────────┘
```

## Documentation

- [Getting Started](https://docs.mindroom.chat/getting-started/index.md) - Installation and first steps
- [Hosted Matrix Deployment](https://docs.mindroom.chat/deployment/hosted-matrix/index.md) - Run only `uvx mindroom` locally against hosted Matrix
- [Configuration](https://docs.mindroom.chat/configuration/index.md) - All configuration options
- [Cultures](https://docs.mindroom.chat/configuration/cultures/index.md) - Configure shared agent cultures
- [Dashboard](https://docs.mindroom.chat/dashboard/index.md) - Web UI for configuration
- [OpenAI-Compatible API](https://docs.mindroom.chat/openai-api/index.md) - Use agents from any OpenAI-compatible client
- [Tools](https://docs.mindroom.chat/tools/index.md) - Available tool integrations
- [OpenClaw Import](https://docs.mindroom.chat/openclaw/index.md) - Reuse OpenClaw workspace files in MindRoom
- [MCP (Planned)](https://docs.mindroom.chat/tools/mcp/index.md) - Native MCP status and current plugin workaround
- [Skills](https://docs.mindroom.chat/skills/index.md) - OpenClaw-compatible skills system
- [Plugins](https://docs.mindroom.chat/plugins/index.md) - Extend with custom tools and skills
- [Knowledge Bases](https://docs.mindroom.chat/knowledge/index.md) - Configure RAG-backed document indexing
- [Memory System](https://docs.mindroom.chat/memory/index.md) - How agent memory works
- [Scheduling](https://docs.mindroom.chat/scheduling/index.md) - Schedule tasks with cron or natural language
- [Voice Messages](https://docs.mindroom.chat/voice/index.md) - Voice message transcription
- [Image Messages](https://docs.mindroom.chat/images/index.md) - Image analysis with vision models
- [File & Video Attachments](https://docs.mindroom.chat/attachments/index.md) - Context-scoped file and video handling
- [Authorization](https://docs.mindroom.chat/authorization/index.md) - User and room access control
- [Architecture](https://docs.mindroom.chat/architecture/index.md) - How it works under the hood
- [Deployment](https://docs.mindroom.chat/deployment/index.md) - Docker and Kubernetes deployment
- [Bridges](https://docs.mindroom.chat/deployment/bridges/index.md) - Connect Telegram, Slack, and other platforms to Matrix
- [Sandbox Proxy](https://docs.mindroom.chat/deployment/sandbox-proxy/index.md) - Isolate code-execution tools in a sandbox
- [Google Services OAuth](https://docs.mindroom.chat/deployment/google-services-oauth/index.md) - Admin OAuth setup for Gmail/Calendar/Drive/Sheets
- [Google Services OAuth (Individual)](https://docs.mindroom.chat/deployment/google-services-user-oauth/index.md) - Single-user OAuth setup
- [CLI Reference](https://docs.mindroom.chat/cli/index.md) - Command-line interface
- [Support](https://docs.mindroom.chat/support/index.md) - Contact and troubleshooting help
- [Privacy Policy](https://docs.mindroom.chat/privacy/index.md) - Privacy and data handling information
- [Terms of Service](https://docs.mindroom.chat/terms/index.md) - Terms for using MindRoom services and clients

## License

- **Repository (except `saas-platform/`)**: Apache License 2.0
- **SaaS Platform** (`saas-platform/`): Business Source License 1.1 (converts to Apache 2.0 on 2030-02-06)

# Getting Started

This guide will help you set up MindRoom and create your first AI agent.

## Recommended: Hosted Matrix + Local MindRoom (`uv` only)

If you do not want to self-host Matrix yet, this is the simplest setup. You only run MindRoom locally.

**Prerequisite:** Install [uv](https://docs.astral.sh/uv/getting-started/installation/).

### 1. Initialize local config

```bash
uvx mindroom config init --profile public
```

This creates:

- `~/.mindroom/config.yaml`
- `~/.mindroom/.env` prefilled with `MATRIX_HOMESERVER=https://mindroom.chat`

### 2. Add model API key(s)

```bash
$EDITOR ~/.mindroom/.env
```

Set at least one key:

- `OPENAI_API_KEY=...`, or
- `OPENROUTER_API_KEY=...`, or
- another supported provider key.

### 3. Pair your local install from chat UI

1. Open `https://chat.mindroom.chat` and sign in.
1. Go to `Settings -> Local MindRoom`.
1. Click `Generate Pair Code`.
1. Run locally:

```bash
uvx mindroom connect --pair-code ABCD-EFGH
```

Notes:

- Pair code is short-lived (10 minutes).
- `mindroom connect` writes local provisioning values (including `MINDROOM_NAMESPACE`) into `~/.mindroom/.env` by default.

### 4. Run MindRoom

```bash
uvx mindroom run
```

### 5. Verify in chat

Send a message mentioning your agent in a room where it is configured.

For a detailed architecture and credential model, see: [Hosted Matrix deployment guide](https://docs.mindroom.chat/deployment/hosted-matrix/index.md).

## Alternative: Full Stack Docker Compose (bundled dashboard + Matrix + Element)

Use this when you want everything local: the bundled MindRoom dashboard, Matrix homeserver, and a Matrix client in one stack.

**Prereqs:** Docker + Docker Compose.

### 1. Clone the full stack repo

```bash
git clone https://github.com/mindroom-ai/mindroom-stack
cd mindroom-stack
```

### 2. Add your API keys

```bash
cp .env.example .env
$EDITOR .env  # add at least one AI provider key
```

### 3. Start everything

```bash
docker compose up -d
```

Open:

- MindRoom UI: http://localhost:8765
- Element: http://localhost:8080
- Matrix homeserver: http://matrix.localhost:8008

## Manual Install (advanced)

Use this if you already have a Matrix homeserver and want to run MindRoom directly.

### Prerequisites

- Python 3.12 or higher
- A Matrix homeserver (or use a public one like matrix.org)
- API keys for your preferred AI provider (Anthropic, OpenAI, etc.)

### Installation

=== "uv (recommended)"

    ```bash
    uv tool install mindroom
    ```

=== "pip"

    ```bash
    pip install mindroom
    ```

=== "From source"

    ```bash
    git clone https://github.com/mindroom-ai/mindroom
    cd mindroom
    uv sync
    source .venv/bin/activate
    ```

### Configuration

#### 1. Create your config file

Create a `config.yaml` in your working directory:

```yaml
agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant that can answer questions
    model: default
    include_default_tools: true
    rooms: [lobby]
    # Optional: file-based context (OpenClaw-style)
    # context_files: [./workspace/SOUL.md, ./workspace/USER.md]

models:
  default:
    provider: openai
    id: gpt-5.2

defaults:
  tools: [scheduler]
  markdown: true

timezone: America/Los_Angeles
```

#### 2. Set up environment variables

Create a `.env` file with your credentials:

```
# Matrix homeserver (must allow open registration for agent accounts)
MATRIX_HOMESERVER=https://matrix.example.com

# Optional: For self-signed certificates (development)
# MATRIX_SSL_VERIFY=false

# Optional: For federation setups where server_name differs from homeserver hostname
# MATRIX_SERVER_NAME=example.com

# AI provider API keys
OPENAI_API_KEY=your_openai_key
# OPENROUTER_API_KEY=your_openrouter_key
# ANTHROPIC_API_KEY=your_anthropic_key

# Optional: protect the dashboard API (recommended for non-localhost)
# MINDROOM_API_KEY=your-secret-key
```

#### Optional: Bootstrap local Synapse + Cinny with Docker (Linux/macOS)

If you want a local Matrix + client setup without running the full `mindroom-stack` app,
use the helper command:

```bash
mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
```

If you're running from source in this repo, use:

```bash
uv run mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
```

This starts Synapse from the `mindroom-stack` compose files, starts a MindRoom Cinny
container, waits for both services to be healthy, and by default writes local Matrix
settings to `.env` next to your active `config.yaml`.

> [!NOTE]
> MindRoom automatically creates Matrix user accounts for each agent. Your Matrix homeserver must allow open registration, or you need to configure it to allow registration from localhost. If registration fails, check your homeserver's registration settings.

#### 3. Run MindRoom

```bash
mindroom run
```

MindRoom will:

1. Connect to your Matrix homeserver
2. Create Matrix users for each agent
3. Create any rooms that don't exist and join them
4. Start listening for messages

## Next Steps

- Learn about [agent configuration](https://docs.mindroom.chat/configuration/agents/index.md)
- Learn about [OpenClaw workspace import](https://docs.mindroom.chat/openclaw/index.md) if you want file-based memory/context patterns
- Explore [available tools](https://docs.mindroom.chat/tools/index.md)
- Set up [teams for multi-agent collaboration](https://docs.mindroom.chat/configuration/teams/index.md)

# Web Dashboard

MindRoom includes a web dashboard for configuring agents, teams, rooms, and integrations without editing YAML files. Changes are synchronized to `config.yaml` in real time.

## Accessing the Dashboard

**Standalone Mode:**

```bash
mindroom run
```

The dashboard will be available at `http://localhost:8765`. When running from a source checkout, MindRoom will build the dashboard assets on first start if Bun is available.

**SaaS Platform:** Access your dashboard at `https://<instance-id>.mindroom.chat`

## Dashboard Tabs

### Dashboard (Overview)

The main dashboard shows system stats and monitoring:

- **Stats cards** - Agents (with status breakdown), rooms, teams, models, and voice status
- **Network graph** - Visual representation of agent-room-team relationships (desktop only)
- **Search and filter** - Filter by agents, rooms, or teams
- **Export Config** - Download configuration as JSON

### Agents

Configure AI agents:

- **Display name** and **Role description**
- **Model** - Select from configured models
- **Memory backend** - Inherit global memory backend or override per agent (`mem0` or `file`)
- **Tools** - Organized into configured tools (green badge) and default tools (no config needed)
- **Instructions** - Custom behavior instructions
- **Rooms** - Where the agent operates
- **Learning** - Enable or disable Agno Learning per agent (enabled by default)
- **Learning mode** - Choose `always` (automatic extraction) or `agentic` (tool-driven)

### Teams

Configure multi-agent collaboration:

- **Display name** and **Team purpose**
- **Collaboration mode** - Coordinate (sequential) or Collaborate (parallel)
- **Team model** - Optional model override
- **Team members** and **Team rooms**

### Rooms

Manage Matrix room configuration:

- **Display name** and **Description**
- **Room model** - Optional model override
- **Agents in room** - Select which agents have access

### External Rooms

View and manage rooms that agents have joined but are not in the configuration:

- **Per-agent view** with room names and IDs
- **Bulk selection** and **Leave rooms** functionality
- **Open in Matrix** - Link to view in your Matrix client

### Models & API Keys

Configure AI model providers:

- **Add/edit models** with provider, model ID, host URL, and advanced settings
- **Provider filter** to show models by provider
- **Test connection** to verify model accessibility
- **Provider API keys** section for configuring credentials

**Runtime-supported providers:** OpenAI, Anthropic, Google Gemini (`google`/`gemini`), Vertex AI Claude (`vertexai_claude`), Ollama, OpenRouter, Groq, DeepSeek, Cerebras
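
As an illustrative sketch (the model name `local` and model ID `llama3.1` are placeholders), a self-hosted Ollama entry in `config.yaml` could look like:

```yaml
models:
  local:
    provider: ollama
    id: llama3.1                  # placeholder; any model pulled into your Ollama instance
    host: http://localhost:11434  # Ollama's default host URL; can also be set via OLLAMA_HOST
```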

### Memory

Configure global memory defaults:

- **Backend** - Global default backend (`mem0` or `file`)
- **Provider** - Ollama (local), OpenAI, HuggingFace, or Sentence Transformers
- **Model** - Provider-specific embedding models
- **Host URL** - For Ollama provider
- **File backend settings** - Path and file memory tuning options
- **Auto-flush settings** - Background extraction and flush controls for file-backed memory

Per-agent overrides are configured from the **Agents** tab using the **Memory backend** selector.

### Knowledge

Manage file-backed RAG knowledge bases:

- **Create/edit/delete knowledge bases** with `path` and `watch` settings
- **Upload and remove files** per knowledge base
- **Reindex** a knowledge base on demand
- **Track index status** (`file_count` and `indexed_count`)
- **Assign agents** to a specific knowledge base from the Agents tab

Git-backed knowledge bases are supported, but Git settings are currently configured in `config.yaml` (`knowledge_bases.<id>.git`) rather than through dedicated dashboard controls.

- The dashboard preserves existing `git` settings when you edit `path`/`watch`.
- `/api/knowledge/bases/{base_id}/files` reflects the manager's filtered file set (for example `include_patterns`/`exclude_patterns`).
- Private HTTPS repo auth can be managed in the **Credentials** tab, then referenced by `knowledge_bases.<id>.git.credentials_service`.
- In API-only mode, Git-backed bases are cloned/synced/indexed automatically on first manager initialization.
- `POST /api/knowledge/bases/{base_id}/reindex` syncs Git first for Git-backed bases before rebuilding the index.
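
Based on the configuration reference, a Git-backed base defined in `config.yaml` looks roughly like this (the repository URL is a placeholder):

```yaml
knowledge_bases:
  docs:
    path: ./knowledge_docs/default
    watch: true
    git:
      repo_url: https://github.com/example/example-docs  # placeholder repository
      branch: main
      include_patterns: ["docs/**"]
      credentials_service: github_private  # optional; managed in the Credentials tab
```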

### Credentials

Manage service credentials directly from the dashboard:

- **List configured credential services** from `CredentialsManager`
- **Create/select service names** (for example `github_private` or `model:sonnet`)
- **Edit raw JSON credential payloads** and save via `/api/credentials/{service}`
- **Test credentials existence** using `/api/credentials/{service}/test`
- **Delete credential sets** using `/api/credentials/{service}`
- **Reuse credentials for Git knowledge sync** by setting `knowledge_bases.<id>.git.credentials_service` to the same service name
- `GITHUB_TOKEN` auto-seeds `github_private` (`username: x-access-token`, `token: <GITHUB_TOKEN>`, `_source: env`) unless the service is UI-managed
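
For reference, the auto-seeded `github_private` entry described above corresponds to a raw JSON credential payload of roughly this shape:

```json
{
  "username": "x-access-token",
  "token": "<GITHUB_TOKEN>",
  "_source": "env"
}
```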

### Voice

Configure voice message handling:

- **Enable/disable** voice message support
- **Speech-to-Text** - OpenAI Whisper or self-hosted
- **Command Intelligence** - Model selection for command recognition

### Integrations

Connect external services to enable agent capabilities:

- **Categories** - Email & Calendar, Communication, Shopping, Entertainment, Social, Development, Research, Smart Home, Information
- **Search and filter** by status (Available, Unconfigured, Configured, Coming Soon)
- **OAuth flows** for Google, Spotify, Home Assistant, etc.

## Features

### Real-time Sync

The sync status indicator in the header shows:

- **Synced** - All changes saved
- **Syncing...** - Save in progress
- **Sync Error** - Sync failed
- **Disconnected** - Lost connection to backend

### Theme and Responsive Design

Toggle between dark and light themes. The dashboard adapts to desktop and mobile devices.

## API Endpoints

The dashboard communicates with the backend API at `/api/`:

### Configuration

| Method | Endpoint                  | Description                 |
| ------ | ------------------------- | --------------------------- |
| POST   | `/api/config/load`        | Fetch current configuration |
| PUT    | `/api/config/save`        | Save full configuration     |
| GET    | `/api/config/agents`      | List all agents             |
| POST   | `/api/config/agents`      | Create new agent            |
| PUT    | `/api/config/agents/{id}` | Update agent                |
| DELETE | `/api/config/agents/{id}` | Delete agent                |
| GET    | `/api/config/teams`       | List all teams              |
| POST   | `/api/config/teams`       | Create new team             |
| PUT    | `/api/config/teams/{id}`  | Update team                 |
| DELETE | `/api/config/teams/{id}`  | Delete team                 |
| GET    | `/api/config/models`      | List model configurations   |
| PUT    | `/api/config/models/{id}` | Update model configuration  |
| GET    | `/api/config/room-models` | Get room model overrides    |
| PUT    | `/api/config/room-models` | Update room model overrides |

### Credentials

| Method | Endpoint                             | Description                    |
| ------ | ------------------------------------ | ------------------------------ |
| GET    | `/api/credentials/list`              | List services with credentials |
| GET    | `/api/credentials/{service}/status`  | Get credential status          |
| GET    | `/api/credentials/{service}`         | Get credentials for editing    |
| POST   | `/api/credentials/{service}`         | Set credentials                |
| POST   | `/api/credentials/{service}/api-key` | Set API key                    |
| GET    | `/api/credentials/{service}/api-key` | Get masked API key             |
| POST   | `/api/credentials/{service}/test`    | Test credentials validity      |
| DELETE | `/api/credentials/{service}`         | Delete credentials             |

### Knowledge

| Method | Endpoint                                      | Description                       |
| ------ | --------------------------------------------- | --------------------------------- |
| GET    | `/api/knowledge/bases`                        | List configured knowledge bases   |
| GET    | `/api/knowledge/bases/{base_id}/files`        | List files in a knowledge base    |
| POST   | `/api/knowledge/bases/{base_id}/upload`       | Upload one or more files          |
| DELETE | `/api/knowledge/bases/{base_id}/files/{path}` | Delete a file from disk and index |
| GET    | `/api/knowledge/bases/{base_id}/status`       | Get indexing status               |
| POST   | `/api/knowledge/bases/{base_id}/reindex`      | Rebuild the index for a base      |

### Tools & Matrix

| Method | Endpoint                        | Description                      |
| ------ | ------------------------------- | -------------------------------- |
| GET    | `/api/tools`                    | List available tools             |
| GET    | `/api/rooms`                    | List configured rooms            |
| GET    | `/api/matrix/agents/rooms`      | Get all agents' room memberships |
| GET    | `/api/matrix/agents/{id}/rooms` | Get specific agent's rooms       |
| POST   | `/api/matrix/rooms/leave`       | Leave a single room              |
| POST   | `/api/matrix/rooms/leave-bulk`  | Leave multiple rooms             |

# Configuration

MindRoom is configured through a `config.yaml` file. This section covers all configuration options.

## Configuration File

MindRoom searches for the configuration file in this order (first match wins):

1. `MINDROOM_CONFIG_PATH` environment variable (if set)
1. `./config.yaml` (current working directory)
1. `~/.mindroom/config.yaml` (home directory)

Data storage (`mindroom_data/`) is placed next to the config file by default.

You can also validate a specific file directly:

```bash
mindroom config validate --path /path/to/config.yaml
```

## Environment Variables

### Core

| Variable                   | Description                                   | Default                                     |
| -------------------------- | --------------------------------------------- | ------------------------------------------- |
| `MINDROOM_CONFIG_PATH`     | Path to `config.yaml`                         | `./config.yaml` → `~/.mindroom/config.yaml` |
| `MINDROOM_STORAGE_PATH`    | Data storage directory                        | `mindroom_data/` next to config             |
| `MINDROOM_CONFIG_TEMPLATE` | Template to seed config from (for containers) | Same as config path                         |

### Matrix

| Variable             | Description                | Default                     |
| -------------------- | -------------------------- | --------------------------- |
| `MATRIX_HOMESERVER`  | Matrix homeserver URL      | `http://localhost:8008`     |
| `MATRIX_SERVER_NAME` | Server name for federation | *(derived from homeserver)* |
| `MATRIX_SSL_VERIFY`  | Verify SSL certificates    | `true`                      |

### API Keys

Set the API key for each provider you use in `config.yaml`:

| Variable             | Provider                     |
| -------------------- | ---------------------------- |
| `ANTHROPIC_API_KEY`  | Anthropic (Claude)           |
| `OPENAI_API_KEY`     | OpenAI                       |
| `GOOGLE_API_KEY`     | Google (Gemini)              |
| `OPENROUTER_API_KEY` | OpenRouter                   |
| `DEEPSEEK_API_KEY`   | DeepSeek                     |
| `CEREBRAS_API_KEY`   | Cerebras                     |
| `GROQ_API_KEY`       | Groq                         |
| `OLLAMA_HOST`        | Ollama (host URL, not a key) |
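
For example, a `.env` fragment enabling two hosted providers plus a local Ollama host might look like this (values are placeholders; the port is Ollama's upstream default):

```
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
# Ollama takes a host URL rather than an API key
OLLAMA_HOST=http://localhost:11434
```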

### Sandbox Proxy

See [Sandbox Proxy](https://docs.mindroom.chat/deployment/sandbox-proxy/index.md) for the full list of `MINDROOM_SANDBOX_*` variables.

## Basic Structure

```yaml
# Agent definitions (at least one recommended)
agents:
  assistant:
    display_name: Assistant        # Required: Human-readable name
    role: A helpful AI assistant   # Optional: Description of purpose
    model: sonnet                  # Optional: Model name (default: "default")
    tools: [file, shell]           # Optional: Agent-specific tools (merged with defaults.tools)
    include_default_tools: true    # Optional: Per-agent opt-out for defaults.tools
    skills: []                     # Optional: List of skill names
    instructions: []               # Optional: Custom instructions
    rooms: [lobby]                 # Optional: Rooms to auto-join
    markdown: true                 # Optional: Override default (inherits from defaults section)
    worker_tools: [shell, file]    # Optional: Override default (inherits from defaults section)
    worker_scope: user_agent       # Optional: Scope proxied tool state per requester+agent
    learning: true                 # Optional: Override default (inherits from defaults section)
    learning_mode: always          # Optional: Override default (inherits from defaults section)
    memory_backend: file           # Optional: Per-agent memory backend override (mem0 or file)
    memory_file_path: ./openclaw_data  # Optional: Per-agent file-memory scope directory (relative to config.yaml)
    knowledge_bases: [docs]         # Optional: Assign one or more configured knowledge bases
    context_files:                 # Optional: Load files into role context at init/reload
      - ./openclaw_data/SOUL.md
      - ./openclaw_data/AGENTS.md
      - ./openclaw_data/USER.md
      - ./openclaw_data/IDENTITY.md
      - ./openclaw_data/MEMORY.md
      - ./openclaw_data/TOOLS.md
      - ./openclaw_data/HEARTBEAT.md
  researcher:
    display_name: Researcher
    role: Research and gather information
    model: sonnet
  writer:
    display_name: Writer
    role: Write and edit content
    model: sonnet
  developer:
    display_name: Developer
    role: Write code and implement features
    model: sonnet
  reviewer:
    display_name: Reviewer
    role: Review code and provide feedback
    model: sonnet

# Model configurations (at least a "default" model is recommended)
models:
  default:
    provider: anthropic            # Required: openai, anthropic, ollama, google, gemini, vertexai_claude, groq, cerebras, openrouter, deepseek
    id: claude-sonnet-4-5-latest     # Required: Model ID for the provider
  sonnet:
    provider: anthropic            # Required: openai, anthropic, ollama, google, gemini, vertexai_claude, groq, cerebras, openrouter, deepseek
    id: claude-sonnet-4-5-latest     # Required: Model ID for the provider
    host: null                     # Optional: Host URL (e.g., for Ollama)
    api_key: null                  # Optional: API key (usually from env vars)
    extra_kwargs: null             # Optional: Provider-specific parameters

# Team configurations (optional)
teams:
  research_team:
    display_name: Research Team    # Required: Human-readable name
    role: Collaborative research   # Required: Description of team purpose
    agents: [researcher, writer]   # Required: List of agent names
    mode: collaborate              # Optional: "coordinate" or "collaborate" (default: coordinate)
    model: sonnet                  # Optional: Model for team coordination (default: "default")
    rooms: []                      # Optional: Rooms to auto-join

# Culture configurations (optional)
cultures:
  engineering:
    description: Follow clean code principles and write tests  # Shared principles
    agents: [developer, reviewer]  # Agents assigned (each agent can belong to at most one culture)
    mode: automatic                # automatic, agentic, or manual

# Router configuration (optional)
router:
  model: default                   # Optional: Model for routing (default: "default")

# Default settings for all agents (optional)
defaults:
  tools: [scheduler]               # Default: ["scheduler"] (added to every agent; set [] to disable)
  markdown: true                   # Default: true
  enable_streaming: true           # Default: true (stream responses via message edits)
  learning: true                   # Default: true
  learning_mode: always            # Default: always (or agentic)
  max_preload_chars: 50000         # Hard cap for preloaded context from context_files
  show_stop_button: true           # Default: true (global only, cannot be overridden per-agent)
  num_history_runs: null           # Number of prior runs to include (null = all)
  num_history_messages: null       # Max messages from history (null = use num_history_runs)
  compress_tool_results: true      # Compress tool results in history to save context
  enable_session_summaries: false  # AI summaries of older conversation segments (costs extra LLM call)
  max_tool_calls_from_history: null  # Limit tool call messages replayed from history (null = no limit)
  show_tool_calls: true            # Default: true (show tool call details inline in responses)
  worker_tools: null               # Default: null (tool names to route through workers; null = use MindRoom's default routing policy, [] = disable)
  worker_scope: null               # Default: null (shared runtime state unless an agent opts into worker isolation)

# defaults.tools are appended to each agent's tools list with duplicates removed.
# Set agents.<name>.include_default_tools: false to opt out a specific agent.

# Memory system configuration (optional)
memory:
  backend: mem0                    # Global default backend (mem0 or file); agents can override with memory_backend
  embedder:
    provider: openai               # Default: openai
    config:
      model: text-embedding-3-small  # Default embedding model
      api_key: null                # Optional: From env var
      host: null                   # Optional: For self-hosted
  llm:                             # Optional: LLM for memory operations
    provider: ollama
    config: {}

# Knowledge base configuration (optional)
knowledge_bases:
  docs:
    path: ./knowledge_docs/default # Folder containing documents for this base
    watch: true                    # Reindex automatically when files change
    git:                           # Optional: Sync this folder from a Git repository
      repo_url: https://github.com/pipefunc/pipefunc
      branch: main
      poll_interval_seconds: 300
      skip_hidden: true
      include_patterns: ["docs/**"]  # Optional: root-anchored glob filters
      exclude_patterns: []
      credentials_service: github_private # Optional: service in CredentialsManager

# Voice message handling (optional)
voice:
  enabled: false                   # Default: false
  visible_router_echo: false       # Optional: show the normalized voice text from the router
  stt:
    provider: openai               # Default: openai
    model: whisper-1               # Default: whisper-1
    api_key: null
    host: null
  intelligence:
    model: default                 # Model for command recognition

# Internal MindRoom user account (optional)
mindroom_user:
  username: mindroom_user          # Set before first startup (localpart only)
  display_name: MindRoomUser       # Can be changed later

# Matrix room onboarding/discoverability (optional)
matrix_room_access:
  mode: single_user_private        # Default keeps invite-only/private behavior
  multi_user_join_rule: public     # In multi_user mode: public or knock
  publish_to_room_directory: false # Publish managed rooms in server room directory
  invite_only_rooms: []            # Room keys/aliases/IDs that stay invite-only/private
  reconcile_existing_rooms: false  # Explicit migration of existing managed rooms

# Authorization (optional)
authorization:
  global_users: []                 # Users with access to all rooms
  room_permissions: {}             # Keys: room ID (!id), full alias (#alias:domain), or managed room key (alias)
  default_room_access: false       # Default: false
  agent_reply_permissions: {}      # Per-agent/team/router (or '*') reply allowlists; supports globs like '*:example.com'

# Room-specific model overrides (optional)
# Keys are room aliases, values are model names from the models section
# Example: room_models: {dev: sonnet, lobby: gpt4o}
room_models: {}

# Non-MindRoom bot accounts to exclude from multi-human detection (optional)
# These accounts won't trigger the mention requirement in threads
bot_accounts:
  - "@telegram:example.com"

# Plugin paths (optional)
plugins: []

# Timezone for scheduled tasks (optional)
timezone: America/Los_Angeles      # Default: UTC
```

## Internal User Username

- Configure `mindroom_user.username` with the Matrix localpart you want before first startup.
- After the account is created, `mindroom_user.username` is locked and cannot be changed in place.
- You can safely change `mindroom_user.display_name` at any time.

## Sections

- [Agents](https://docs.mindroom.chat/configuration/agents/index.md) - Configure individual AI agents
- [Models](https://docs.mindroom.chat/configuration/models/index.md) - Configure AI model providers
- [Teams](https://docs.mindroom.chat/configuration/teams/index.md) - Configure multi-agent collaboration
- [Cultures](https://docs.mindroom.chat/configuration/cultures/index.md) - Configure shared agent cultures
- [Router](https://docs.mindroom.chat/configuration/router/index.md) - Configure message routing
- [Memory](https://docs.mindroom.chat/memory/index.md) - Configure memory providers and behavior
- [Knowledge Bases](https://docs.mindroom.chat/knowledge/index.md) - Configure file-backed knowledge bases
- [Voice](https://docs.mindroom.chat/voice/index.md) - Configure speech-to-text voice processing
- [Authorization](https://docs.mindroom.chat/authorization/index.md) - Configure user and room access control
- [Skills](https://docs.mindroom.chat/skills/index.md) - Skill format, gating, and allowlists
- [Plugins](https://docs.mindroom.chat/plugins/index.md) - Plugin manifest and tool/skill loading

## Notes

- All top-level sections are optional with sensible defaults, but at least one agent is recommended for Matrix interactions
- A model named `default` is required unless agents, teams, and the router all specify explicit non-`default` models
- Agents can set `knowledge_bases`, but each entry must exist in the top-level `knowledge_bases` section
- `agents.<name>.context_files` inject file-based context at agent creation/reload (see [Agents](https://docs.mindroom.chat/configuration/agents/index.md))
- `agents.<name>.room_thread_modes` overrides `thread_mode` for specific rooms, and resolution is room-aware for agents, teams, and router decisions (see [Agents](https://docs.mindroom.chat/configuration/agents/index.md))
- `memory.backend` sets the global memory default, `agents.<name>.memory_backend` overrides it per agent, and `agents.<name>.memory_file_path` sets a custom file-memory scope for that agent
- `defaults.max_preload_chars` caps preloaded file context (`context_files`)
- When `authorization.default_room_access` is `false`, only users in `global_users` or room-specific `room_permissions` can interact with agents
- `authorization.agent_reply_permissions` can further restrict which users specific agents/teams/router will reply to
- `authorization.room_permissions` accepts room IDs, full room aliases, and managed room keys
- `matrix_room_access.mode` defaults to `single_user_private`; this preserves current private/invite-only behavior
- In `multi_user` mode, MindRoom sets managed room join rules and directory visibility from config
- Publishing to the room directory requires the managing service account (typically router) to have moderator/admin power in each room
- The `memory` system works out of the box with OpenAI; use `memory.llm` for memory summarization with a different provider

# Agent Configuration

Agents are the core building blocks of MindRoom. Each agent is a specialized AI actor with specific capabilities.

## Basic Agent

```
agents:
  assistant:
    display_name: Assistant
    role: A helpful AI assistant
    model: sonnet
    rooms: [lobby]
```

## Full Configuration

```
agents:
  developer:
    # Display name shown in Matrix
    display_name: Developer

    # Role description - guides the agent's behavior
    role: Generate code, manage files, execute shell commands

    # Model to use (defined in models section)
    model: sonnet

    # Tools the agent can use
    tools:
      - file
      - shell
      - github

    # Skills the agent can use (defined in skills section or plugins)
    skills:
      - my_custom_skill

    # Custom instructions
    instructions:
      - Always read files before modifying them
      - Use clear variable names
      - Add comments for complex logic

    # Rooms to join (will be created if they don't exist)
    rooms:
      - lobby
      - dev

    # Enable markdown formatting
    markdown: true

    # Enable Agno Learning for this agent
    learning: true

    # Learning mode: always (automatic) or agentic (tool-driven)
    learning_mode: always

    # Memory backend override for this agent (optional: mem0 or file)
    memory_backend: file

    # Custom file-memory scope directory (optional, overrides default <root>/agent_<name>/)
    memory_file_path: ./openclaw_data

    # Assign agent to one or more configured knowledge bases (optional)
    knowledge_bases: [docs]

    # Optional: additional files loaded into role context at agent init/reload
    context_files:
      - ./openclaw_data/SOUL.md
      - ./openclaw_data/AGENTS.md
      - ./openclaw_data/USER.md
      - ./openclaw_data/IDENTITY.md
      - ./openclaw_data/MEMORY.md
      - ./openclaw_data/TOOLS.md
      - ./openclaw_data/HEARTBEAT.md

    # Whether to include defaults.tools for this agent (default: true)
    include_default_tools: true

    # Response mode: "thread" (replies in Matrix threads) or "room" (plain room messages)
    thread_mode: thread

    # Optional room-specific overrides for thread mode
    # Keys may be managed room aliases/names or Matrix room IDs
    room_thread_modes:
      lobby: thread
      bridge_telegram: room
      "!abc123:example.com": room

    # Tools to route through scoped workers via the sandbox proxy (optional, inherits from defaults)
    worker_tools: [shell, file]

    # How proxied tool state is shared (optional, inherits from defaults)
    worker_scope: user_agent

    # Allow this agent to read and modify its own config at runtime
    allow_self_config: false

    # Delegate tasks to other agents via tool calls
    delegate_to:
      - research
      - finance

    # History context controls (all optional, inherit from defaults)
    num_history_runs: null
    num_history_messages: null
    compress_tool_results: true
    enable_session_summaries: false
    max_tool_calls_from_history: null
```

## Configuration Options

| Option                        | Type   | Default     | Description                                                                                                                                                                                                                                                                                                                       |
| ----------------------------- | ------ | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `display_name`                | string | *required*  | Human-readable name shown in Matrix as the bot's display name                                                                                                                                                                                                                                                                     |
| `role`                        | string | `""`        | System prompt describing the agent's purpose — guides its behavior and expertise                                                                                                                                                                                                                                                  |
| `model`                       | string | `"default"` | Model name (must match a key in the `models` section)                                                                                                                                                                                                                                                                             |
| `tools`                       | list   | `[]`        | Agent-specific tool names (see [Tools](https://docs.mindroom.chat/tools/index.md)); effective tools are `tools + defaults.tools` with duplicates removed                                                                                                                                                                          |
| `include_default_tools`       | bool   | `true`      | When `true`, append `defaults.tools` to this agent's `tools`; set to `false` to opt this agent out                                                                                                                                                                                                                                |
| `skills`                      | list   | `[]`        | Skill names the agent can use (see [Skills](https://docs.mindroom.chat/skills/index.md))                                                                                                                                                                                                                                          |
| `instructions`                | list   | `[]`        | Extra lines appended to the system prompt after the role                                                                                                                                                                                                                                                                          |
| `rooms`                       | list   | `[]`        | Room aliases to auto-join; rooms are created if they don't exist                                                                                                                                                                                                                                                                  |
| `markdown`                    | bool   | `null`      | When enabled, the agent is instructed to format responses as Markdown. Inherits from `defaults.markdown` (default: `true`)                                                                                                                                                                                                        |
| `learning`                    | bool   | `null`      | Enable [Agno Learning](https://docs.agno.com/agents/learning) — the agent builds a persistent profile of user preferences and adapts over time. Inherits from `defaults.learning` (default: `true`)                                                                                                                               |
| `learning_mode`               | string | `null`      | `always`: agent automatically learns from every interaction. `agentic`: agent decides when to learn via a tool call. Inherits from `defaults.learning_mode` (default: `"always"`)                                                                                                                                                 |
| `memory_backend`              | string | `null`      | Memory backend override for this agent (`"mem0"` or `"file"`). Inherits from global `memory.backend` when omitted                                                                                                                                                                                                                 |
| `memory_file_path`            | string | `null`      | Custom directory to use as the file-memory scope for this agent instead of the default `<root>/agent_<name>/`. Useful for pointing an agent at an existing workspace (e.g. an OpenClaw workspace). Resolved relative to the config file directory                                                                                 |
| `knowledge_bases`             | list   | `[]`        | Knowledge base IDs from top-level `knowledge_bases` — gives the agent RAG access to the indexed documents                                                                                                                                                                                                                         |
| `context_files`               | list   | `[]`        | File paths loaded at agent init/reload and prepended to role context (under `Personality Context`)                                                                                                                                                                                                                                |
| `thread_mode`                 | string | `"thread"`  | `thread`: responses are sent in Matrix threads (default). `room`: responses are sent as plain room messages with a single persistent session per room — ideal for bridges (Telegram, Signal, WhatsApp) and mobile                                                                                                                 |
| `room_thread_modes`           | map    | `{}`        | Per-room thread mode overrides keyed by room alias/name or Matrix room ID. Values are `thread` or `room`. Overrides apply before `thread_mode` fallback                                                                                                                                                                           |
| `num_history_runs`            | int    | `null`      | Number of prior Agno runs to include as history context (`null` = all). Mutually exclusive with `num_history_messages`                                                                                                                                                                                                            |
| `num_history_messages`        | int    | `null`      | Max messages from history. Mutually exclusive with `num_history_runs`                                                                                                                                                                                                                                                             |
| `compress_tool_results`       | bool   | `null`      | Compress tool results in history to save context. Inherits from `defaults.compress_tool_results` (default: `true`)                                                                                                                                                                                                                |
| `enable_session_summaries`    | bool   | `null`      | Generate AI summaries of older conversation segments for compaction (each summary costs an extra LLM call). Inherits from `defaults.enable_session_summaries` (default: `false`)                                                                                                                                                  |
| `max_tool_calls_from_history` | int    | `null`      | Limit tool call messages replayed from history (`null` = no limit)                                                                                                                                                                                                                                                                |
| `show_tool_calls`             | bool   | `null`      | Show tool-call markers and trace metadata in Matrix messages. Inherits from `defaults.show_tool_calls` (default: `true`). When `false`, inline markers and `io.mindroom.tool_trace` are omitted from sent Matrix message content. Note: this flag is not currently enforced by the OpenAI-compatible `/v1/chat/completions` path. |
| `worker_tools`                | list   | `null`      | Tool names to route through the [sandbox proxy](https://docs.mindroom.chat/deployment/sandbox-proxy/index.md). Inherits from `defaults.worker_tools`. When omitted everywhere, MindRoom applies its built-in default routing policy. Set to `[]` to explicitly disable proxy routing for this agent                               |
| `worker_scope`                | string | `null`      | Worker-state sharing mode for proxied tools. Inherits from `defaults.worker_scope`. Valid values are `shared`, `user`, `user_agent`, and `room_thread`                                                                                                                                                                            |
| `allow_self_config`           | bool   | `null`      | Give this agent a scoped tool to read and modify its own configuration at runtime. Inherits from `defaults.allow_self_config` (default: `false`). Lighter-weight alternative to the `config_manager` tool                                                                                                                         |
| `delegate_to`                 | list   | `[]`        | Agent names this agent can delegate tasks to via tool calls (see [Agent Delegation](#agent-delegation))                                                                                                                                                                                                                           |

Each entry in `knowledge_bases` must match a key under `knowledge_bases` in `config.yaml`.

Per-agent fields with a `null` default inherit from the `defaults` section at runtime; per-agent values override them. `memory.backend` is the global memory default, and `agents.<name>.memory_backend` overrides it per agent; the dashboard Agents tab exposes this override as the **Memory Backend** selector for each agent. `show_stop_button` and `enable_streaming` are global-only settings in `defaults` and cannot be overridden per-agent.
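The `null`-inheritance rule can be sketched as follows (a minimal illustration, not MindRoom's actual code):

```python
def resolve_setting(agent_cfg: dict, defaults: dict, field: str, builtin):
    """Sketch of null-inheritance: the per-agent value wins, then the
    defaults section, then the built-in fallback."""
    value = agent_cfg.get(field)
    if value is None:
        value = defaults.get(field)
    return builtin if value is None else value
```

For example, an agent that sets `markdown: false` keeps that value even when `defaults.markdown` is `true`, while an agent that omits the field inherits the default.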

Learning data is persisted to `mindroom_data/learning/<agent>.db`, so it survives container restarts when the storage directory is mounted.

## Worker Routing

`worker_tools` decides which toolkits are executed through the sandbox proxy instead of directly in the main MindRoom process. When `worker_tools` is omitted, MindRoom currently routes `coding`, `file`, `python`, and `shell` through workers by default and keeps other tools local. `worker_scope` decides which proxied calls share the same worker-owned state directory.

Some credential-backed custom tools stay local even if they are listed in `worker_tools`. Currently that local-only set is `gmail`, `google_calendar`, `google_sheets`, and `homeassistant`.

The supported `worker_scope` values are:

- `shared`: one shared worker state per agent.
- `user`: one worker state per requester.
- `user_agent`: one worker state per requester and agent.
- `room_thread`: one worker state per room thread, or per room when no thread ID exists.

Leave `worker_scope` unset to keep proxied calls unscoped. They still run in the sandbox runner, but they do not get a worker-specific storage root. `worker_scope` primarily affects proxied tool execution, but it also influences dashboard credential support and OpenAI-compatible agent eligibility.

The dashboard credential UI can only manage credentials for unscoped agents and agents with `worker_scope=shared`. Agents using `user`, `user_agent`, or `room_thread` treat credentials as runtime-owned worker state instead of dashboard-managed state.

## Thread Mode Resolution

Thread mode is resolved per message using the current room ID. For an agent, MindRoom checks `room_thread_modes` in this order:

1. An exact room ID key.
2. The managed room key/alias associated with that room ID.
3. Each configured `room_thread_modes` key, resolved to a room ID and matched against the current room.

If none match, the agent falls back to `thread_mode`.

For a team, MindRoom resolves mode per member agent for that room. If all member agents resolve to the same mode, the team uses that mode. If member modes differ, the team defaults to `thread`.

For the router, MindRoom resolves mode using agents relevant to the active room. This includes agents directly configured for the room and agents included via `teams.<name>.rooms`. If all relevant agents resolve to the same mode, the router uses that mode. If modes are mixed, the router defaults to `thread`.
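The per-agent resolution order above can be sketched roughly as follows (hypothetical helper names and mappings; the real implementation may differ):

```python
def resolve_thread_mode(agent_cfg: dict, room_id: str,
                        alias_for_room: dict, room_for_alias: dict) -> str:
    """Sketch of per-agent thread mode resolution.

    alias_for_room maps Matrix room IDs to managed room keys/aliases;
    room_for_alias is the inverse mapping.  Both are assumptions for
    illustration.
    """
    overrides = agent_cfg.get("room_thread_modes", {})
    # 1. Exact room ID key
    if room_id in overrides:
        return overrides[room_id]
    # 2. Managed room key/alias associated with this room ID
    alias = alias_for_room.get(room_id)
    if alias in overrides:
        return overrides[alias]
    # 3. Resolve each configured key to a room ID and compare
    for key, mode in overrides.items():
        if room_for_alias.get(key) == room_id:
            return mode
    # Fallback to the agent-level thread_mode
    return agent_cfg.get("thread_mode", "thread")
```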

## File-Based Context Loading

You can inject file content directly into an agent's role context without using a knowledge base.

`context_files` behavior:

- Paths are resolved relative to the config file directory
- Existing files are loaded in list order and added under `Personality Context`
- Missing files are skipped with a warning in logs

This loading happens when the agent is created (and on config reload), not continuously on every message.
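The loading behavior above can be sketched like this (a hypothetical helper; the exact header text and how the `defaults.max_preload_chars` cap is accounted are assumptions):

```python
from pathlib import Path

def load_context_files(config_dir: str, paths: list[str],
                       max_chars: int = 50_000) -> str:
    """Sketch of context_files preloading: resolve paths relative to the
    config directory, load existing files in list order, warn-and-skip
    missing files, and cap the total preloaded characters."""
    parts, total = [], 0
    for rel in paths:
        path = Path(config_dir) / rel
        if not path.exists():
            print(f"warning: skipping missing context file {path}")
            continue
        text = path.read_text()[: max(0, max_chars - total)]
        total += len(text)
        if text:
            parts.append(text)
        if total >= max_chars:
            break  # preload budget exhausted
    return "Personality Context\n\n" + "\n\n".join(parts)
```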

## Agent Delegation

Agents can delegate tasks to other agents using the `delegate_to` field. When configured, a delegation tool is automatically added to the agent — no need to include `"delegate"` in the `tools` list.

The delegated agent runs as a fresh, one-shot instance with no shared session or history. It executes the task and returns its response as the tool result.

```
agents:
  leader:
    display_name: Leader
    role: Orchestrate tasks by delegating to specialist agents
    model: sonnet
    delegate_to: [code, research]
    rooms: [lobby]

  code:
    display_name: CodeAgent
    role: Generate code, manage files
    model: sonnet
    tools: [file, shell]
    delegate_to: [research]  # can further delegate
    rooms: [lobby]

  research:
    display_name: ResearchAgent
    role: Research topics and provide summaries
    model: sonnet
    tools: [duckduckgo]
    rooms: [lobby]
```

**Constraints:**

- Targets must reference existing agent names in the config
- An agent cannot delegate to itself
- Delegation chains are supported (agent A delegates to B, B delegates to C) up to a maximum depth of 3
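These constraints could be checked with something like the following (a hypothetical validation sketch, not MindRoom's actual code):

```python
def validate_delegation(agents: dict[str, dict], max_depth: int = 3) -> list[str]:
    """Sketch of delegate_to validation: targets must exist, no
    self-delegation, and chains must not exceed max_depth agents."""
    errors = []
    for name, cfg in agents.items():
        for target in cfg.get("delegate_to", []):
            if target not in agents:
                errors.append(f"{name}: unknown delegate target {target!r}")
            if target == name:
                errors.append(f"{name}: cannot delegate to itself")

    def chain_depth(name: str, seen: frozenset) -> int:
        # Longest delegation chain starting at this agent (cycle-guarded)
        if name in seen:
            return 0
        depths = [chain_depth(t, seen | {name})
                  for t in agents.get(name, {}).get("delegate_to", [])
                  if t in agents]
        return 1 + (max(depths) if depths else 0)

    for name in agents:
        if chain_depth(name, frozenset()) > max_depth:
            errors.append(f"{name}: delegation chain exceeds depth {max_depth}")
    return errors
```

The example config above (`leader` → `code` → `research`) forms a chain of exactly three agents, so it sits at the depth limit.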

## Rich Prompt Agents

Certain agent names (the YAML key, not `display_name`) have built-in rich prompts:

`code`, `research`, `calculator`, `general`, `shell`, `summary`, `finance`, `news`, `data_analyst`

When using these names, the built-in prompt replaces the `role` field and any custom `instructions` are ignored.

## Defaults

The `defaults` section sets fallback values for all agents. Any agent that omits a setting inherits the value from here.

```
defaults:
  tools: [scheduler]                   # Tools added to every agent by default (set [] to disable)
  markdown: true                        # Format responses as Markdown
  learning: true                        # Enable Agno Learning
  learning_mode: always                 # "always" or "agentic"
  max_preload_chars: 50000              # Hard cap for preloaded context from context_files
  show_stop_button: true                # Show a stop button while agent is responding (global-only, cannot be overridden per-agent)
  num_history_runs: null                # Number of prior runs to include (null = all)
  num_history_messages: null            # Max messages from history (null = use num_history_runs)
  enable_streaming: true                # Stream agent responses via progressive message edits
  compress_tool_results: true           # Compress tool results in history to save context
  enable_session_summaries: false       # AI summaries of older conversation segments (costs extra LLM call)
  max_tool_calls_from_history: null     # Limit tool call messages replayed from history (null = no limit)
  show_tool_calls: true                 # Show tool-call markers and trace metadata in message content
  worker_tools: null                     # Tool names to route through workers (null = use MindRoom's default routing policy, [] = disable)
  worker_scope: null                     # Worker state scope for proxied tools (shared, user, user_agent, room_thread)
  allow_self_config: false               # Allow agents to read/modify their own config at runtime
```

To opt out a specific agent:

```
agents:
  researcher:
    display_name: Researcher
    role: Focus on deep research
    include_default_tools: false
    tools: [web_search]
```
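The tools merge described above (agent tools first, then `defaults.tools` appended with duplicates removed, opt-out respected) amounts to roughly:

```python
def effective_tools(agent_cfg: dict, default_tools: list[str]) -> list[str]:
    """Sketch of the effective-tools merge: the agent's own tools,
    followed by defaults.tools with duplicates removed, unless the
    agent sets include_default_tools: false."""
    tools = list(agent_cfg.get("tools", []))
    if agent_cfg.get("include_default_tools", True):
        for tool in default_tools:
            if tool not in tools:
                tools.append(tool)
    return tools
```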

# Model Configuration

Models define the AI providers and model IDs used by agents.

## Supported Providers

- `anthropic` - Claude models (Anthropic)
- `openai` - GPT models and OpenAI-compatible endpoints
- `google` or `gemini` - Google Gemini models
- `vertexai_claude` - Anthropic Claude models on Google Vertex AI
- `ollama` - Local models via Ollama
- `groq` - Groq-hosted models (fast inference)
- `openrouter` - OpenRouter-hosted models (access to many providers)
- `cerebras` - Cerebras-hosted models
- `deepseek` - DeepSeek models

## Model Config Fields

Each model configuration supports the following fields:

| Field            | Required | Default | Description                                                                                              |
| ---------------- | -------- | ------- | -------------------------------------------------------------------------------------------------------- |
| `provider`       | Yes      | -       | The AI provider (see supported providers above)                                                          |
| `id`             | Yes      | -       | Model ID specific to the provider                                                                        |
| `host`           | No       | `null`  | Host URL for self-hosted models (e.g., Ollama)                                                           |
| `api_key`        | No       | `null`  | API key (usually read from environment variables)                                                        |
| `extra_kwargs`   | No       | `null`  | Additional provider-specific parameters                                                                  |
| `context_window` | No       | `null`  | Context window size in tokens; when set, history is dynamically trimmed to stay within 80% of this limit |

## Configuration Examples

```
models:
  # Anthropic Claude
  sonnet:
    provider: anthropic
    id: claude-sonnet-4-5-latest
    context_window: 200000

  haiku:
    provider: anthropic
    id: claude-haiku-4-5-latest
    context_window: 200000

  # OpenAI
  gpt:
    provider: openai
    id: gpt-5.2

  # Google Gemini (both 'google' and 'gemini' work as provider names)
  gemini:
    provider: google
    id: gemini-2.0-flash

  # Anthropic Claude on Vertex AI
  vertex_claude:
    provider: vertexai_claude
    id: claude-sonnet-4@20250514
    extra_kwargs:
      project_id: your-gcp-project
      region: us-central1

  # Local via Ollama
  local:
    provider: ollama
    id: llama3.2
    host: http://localhost:11434  # Uses dedicated host field

  # OpenRouter (access to many model providers)
  openrouter:
    provider: openrouter
    id: anthropic/claude-3-opus

  # Groq (fast inference)
  groq:
    provider: groq
    id: llama-3.1-70b-versatile

  # Cerebras
  cerebras:
    provider: cerebras
    id: llama3.1-8b

  # DeepSeek
  deepseek:
    provider: deepseek
    id: deepseek-chat

  # Custom OpenAI-compatible endpoint (e.g., vLLM, llama.cpp server)
  custom:
    provider: openai
    id: my-model
    extra_kwargs:
      base_url: http://localhost:8080/v1
```

## Context Window

When `context_window` is set, MindRoom estimates the total context size before each model call (system prompt + conversation history + current message) using a chars/4 token approximation. If the estimate exceeds 80% of the context window, the number of history runs replayed is automatically reduced to fit within budget. If even a single history run exceeds the remaining budget, history is disabled entirely for that call.

A warning is logged whenever history is trimmed, including the original and reduced run counts.

```
models:
  default:
    provider: anthropic
    id: claude-sonnet-4-5-latest
    context_window: 200000  # 200K tokens
```

This is useful for models with smaller context windows or agents with long-running conversations that accumulate large histories.
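Using the chars/4 approximation, the trimming decision can be sketched as follows (illustrative accounting; the real implementation may measure context differently):

```python
def fit_history_runs(context_window: int, fixed_chars: int,
                     run_chars: list[int]) -> int:
    """Sketch of history trimming under the chars/4 token estimate.

    fixed_chars covers the system prompt and current message; run_chars
    holds the character counts of history runs, newest first.  Returns
    how many of those runs fit within 80% of the context window.
    """
    budget_tokens = int(context_window * 0.8)
    used = fixed_chars // 4  # chars/4 token approximation
    kept = 0
    for chars in run_chars:
        tokens = chars // 4
        if used + tokens > budget_tokens:
            break  # this run would exceed the budget; stop here
        used += tokens
        kept += 1
    return kept  # 0 means history is disabled for this call
```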

## Extra Kwargs

The `extra_kwargs` field passes additional parameters directly to the underlying [Agno](https://docs.agno.com/) model class. Common options include:

- `base_url` - Custom API endpoint (useful for OpenAI-compatible servers)
- `temperature` - Sampling temperature
- `max_tokens` - Maximum tokens in response

## Environment Variables

API keys are read from environment variables:

```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
GROQ_API_KEY=...
OPENROUTER_API_KEY=...
CEREBRAS_API_KEY=...
DEEPSEEK_API_KEY=...
```

For Ollama, you can also set:

```
OLLAMA_HOST=http://localhost:11434
```

### File-based Secrets

For container environments (Kubernetes, Docker Swarm), you can also use file-based secrets by appending `_FILE` to any environment variable name:

```
# Instead of setting the key directly:
ANTHROPIC_API_KEY=sk-ant-...

# Point to a file containing the key:
ANTHROPIC_API_KEY_FILE=/run/secrets/anthropic-api-key
```

This works for all API key environment variables (e.g., `OPENAI_API_KEY_FILE`, `GOOGLE_API_KEY_FILE`).

# Team Configuration

Teams allow multiple agents to collaborate on tasks. MindRoom supports two collaboration modes.

## Team Modes

### Coordinate Mode

The team coordinator analyzes the task and delegates different subtasks to specific team members:

```
teams:
  dev_team:
    display_name: Dev Team
    role: Development team for building features
    agents: [architect, coder, reviewer]
    mode: coordinate
```

In coordinate mode, the coordinator analyzes the task and selects which agents should handle which subtasks based on their roles. The coordinator decides whether to run tasks sequentially or in parallel based on dependencies, then synthesizes all outputs into a cohesive response.

### Collaborate Mode

All agents work on the same task simultaneously and their outputs are synthesized:

```
teams:
  research_team:
    display_name: Research Team
    role: Research team for comprehensive analysis
    agents: [researcher, analyst, writer]
    mode: collaborate
```

In collaborate mode, each agent receives the full task and works on it independently, and the coordinator synthesizes every agent's output into a final response. This is useful when you want diverse perspectives on the same problem.

## Full Configuration

```
teams:
  super_team:
    # Display name shown in Matrix
    display_name: Super Team

    # Description of the team's purpose (required)
    role: Multi-disciplinary team for complex tasks

    # Agents in this team (must be defined in agents section)
    agents:
      - code
      - research
      - finance

    # Collaboration mode: coordinate or collaborate (default: coordinate)
    mode: collaborate

    # Rooms the team responds in
    rooms:
      - team-room

    # Model for team coordination (default: "default")
    model: sonnet
```

## Configuration Fields

| Field          | Required | Default      | Description                                       |
| -------------- | -------- | ------------ | ------------------------------------------------- |
| `display_name` | Yes      | -            | Human-readable name shown in Matrix               |
| `role`         | Yes      | -            | Description of the team's purpose                 |
| `agents`       | Yes      | -            | List of agent names that compose this team        |
| `mode`         | No       | `coordinate` | Collaboration mode: `coordinate` or `collaborate` |
| `rooms`        | No       | `[]`         | List of room names the team responds in           |
| `model`        | No       | `default`    | Model used for team coordination and synthesis    |

## When to Use Each Mode

| Mode          | Use Case                                      | Example                                                                                   |
| ------------- | --------------------------------------------- | ----------------------------------------------------------------------------------------- |
| `coordinate`  | Agents need to do different subtasks          | "Get weather and news" - coordinator assigns weather to one agent, news to another        |
| `collaborate` | Want diverse perspectives on the same problem | "What do you think about X?" - all agents analyze the same question and share their views |

## Dynamic Team Formation

MindRoom can also form ad-hoc teams on the fly, without a `teams` entry in the configuration. Dynamic teams form in these scenarios:

1. **Multiple agents explicitly tagged** - e.g., `@code @research analyze this`
1. **Thread with previously mentioned agents** - Follow-up messages in a thread where multiple agents were mentioned earlier
1. **Thread with multiple agent participants** - Continuing a conversation where multiple agents have responded
1. **DM room with multiple agents** - Messages in a DM room containing multiple agents (main timeline only)

### Mode Selection

For dynamic teams, an AI model selects the collaboration mode based on the task:

- Tasks with different subtasks for each agent use **coordinate** mode
- Tasks asking for opinions or brainstorming use **collaborate** mode

When AI mode selection is unavailable or fails, MindRoom falls back to:

- **coordinate** when multiple agents are explicitly tagged in the message (they likely have different roles to fulfill)
- **collaborate** for all other cases, such as agents from thread history or DM rooms (likely discussing the same topic)

# Culture Configuration

Cultures let a group of agents share evolving principles, practices, and conventions. A culture is backed by [Agno's CultureManager](https://docs.agno.com/agents/culture) and persists its knowledge in a SQLite database under `mindroom_data/culture/<culture_name>.db`.

## Basic Culture

```
cultures:
  engineering:
    description: Follow clean code principles and write tests
    agents: [developer, reviewer]
```

## Full Configuration

```
cultures:
  engineering:
    # Describes the shared principles this culture captures
    description: Follow clean code principles, write tests, and review before merging

    # Agents assigned to this culture (must be defined in agents section)
    agents:
      - developer
      - reviewer

    # How the culture is updated: automatic, agentic, or manual (default: automatic)
    mode: automatic
```

## Configuration Fields

| Field         | Required | Default       | Description                                                                                                             |
| ------------- | -------- | ------------- | ----------------------------------------------------------------------------------------------------------------------- |
| `description` | No       | `""`          | Description of the shared principles and practices the culture captures                                                 |
| `agents`      | No       | `[]`          | Agent names assigned to this culture (must exist in the `agents` section). Each agent can belong to at most one culture |
| `mode`        | No       | `"automatic"` | How culture knowledge is updated (see modes below)                                                                      |

## Culture Modes

| Mode        | Behavior                                                                                                  |
| ----------- | --------------------------------------------------------------------------------------------------------- |
| `automatic` | Culture knowledge is automatically extracted from every agent interaction and added to the shared context |
| `agentic`   | The agent decides when to update culture knowledge via a tool call                                        |
| `manual`    | Culture context is read-only; the description is included in agent context but knowledge is never updated |

All modes include the culture description in the agent's context. The difference is whether and how the culture's knowledge base evolves over time.

## Rules

- Each agent can belong to **at most one** culture. Assigning the same agent to multiple cultures is a validation error.
- All agents listed in a culture must exist in the top-level `agents` section.
- Culture state is persisted to `mindroom_data/culture/<culture_name>.db` and survives restarts.
- Culture managers are cached and shared across agents in the same culture — if two agents belong to the same culture, they share the same `CultureManager` instance.
- Changes to a culture's `description` or `mode` in `config.yaml` invalidate the cache, so the manager is recreated on the next hot-reload.

# Router Configuration

The router is a built-in system component that handles intelligent message routing and room management. It decides which agent should respond when no specific agent is mentioned, sends welcome messages to new rooms, and manages various system-level tasks.

## Configuration

```
router:
  # Model for routing decisions (defaults to "default")
  model: haiku
```

The router has only one configuration option:

| Option  | Type   | Default     | Description                        |
| ------- | ------ | ----------- | ---------------------------------- |
| `model` | string | `"default"` | Model to use for routing decisions |

## How Routing Works

When a message arrives in a room without a specific agent mention:

1. The router checks if there are configured agents in that room
1. It analyzes the message content and any recent thread context (up to 3 previous messages)
1. Based on the available agents' roles, tools, and instructions, it selects the best match
1. The router posts a message mentioning the selected agent (e.g., "@agent could you help with this?")
1. The mentioned agent sees the mention and responds in the thread

The router uses a structured output schema to ensure consistent routing decisions, including the agent name and reasoning for the selection.
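As an illustration of what such a decision record might look like (a minimal sketch; the field names are assumptions, not MindRoom's actual schema):

```
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    """Illustrative routing decision; field names are assumed."""
    agent: str      # the agent selected to respond
    reasoning: str  # why that agent was the best match

decision = RoutingDecision(
    agent="research",
    reasoning="The question asks for background sources.",
)
```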

## Router Responsibilities

The router is a special system agent that handles several important tasks beyond message routing:

### Command Handling

The router exclusively handles all commands:

- `!help [topic]` - Get help on commands or specific topics
- `!hi` - Show the welcome message again
- `!schedule <task>` - Schedule tasks and reminders
- `!list_schedules` - List scheduled tasks
- `!cancel_schedule <id>` - Cancel a scheduled task
- `!edit_schedule <id> <task>` - Edit an existing scheduled task
- `!config <operation>` - Manage configuration
- `!skill <name> [args]` - Run a skill by name

Even in single-agent rooms, commands are always processed by the router.

### Welcome Messages

When the router joins a room with no messages (or only a previous welcome message), it automatically sends a welcome message listing:

- All available agents in that room with their descriptions
- How to interact with agents (mentions, commands)
- Quick command reference

Use `!hi` in any room to see the welcome message again.

### Room Management

The router creates and manages rooms:

- Creates configured rooms that don't exist yet
- Invites agents and users to their configured rooms
- Applies `matrix_room_access` policy for managed rooms (when enabled)
- Generates AI-powered room topics based on configured agents
- Has admin privileges to manage room membership
- Cleans up orphaned bots on startup

By default (`matrix_room_access.mode: single_user_private`), rooms remain invite-only and private in the room directory. In `multi_user` mode, the router can set join rules (`public`/`knock`) and optionally publish rooms to the server directory.

### Voice Message Processing

Audio events are handled through the shared media pipeline on all bots. The router only posts a visible handoff when it must disambiguate between multiple eligible responders in a multi-agent room. When the responder is already clear, normalized audio follows the normal direct agent or team dispatch rules without an extra router message. Set `voice.visible_router_echo: true` if you also want the router to post the normalized voice text as a display-only message when it is allowed to reply. See [Voice Messages](https://docs.mindroom.chat/voice/index.md) for the detailed dispatch behavior.

### Configuration Confirmations

The router handles interactive configuration changes. When a config change is requested, the router posts a confirmation message with reactions, and only the router processes the confirmation reactions.

### Scheduled Task Restoration

When the router joins a room, it restores any previously scheduled tasks and pending configuration changes to ensure they persist across restarts.

## Routing Behavior Details

### Single-Agent Optimization

When there's only one agent configured in a room, the router skips AI routing entirely. The single agent handles messages directly, which is faster and more efficient.

### Multi-Human Thread Protection

When multiple human users have posted in a thread, the router and agents require an explicit `@mention` before responding. This prevents agents from injecting themselves into human-to-human conversations.

The rules are:

1. **Mentioned agents always respond** — an explicit `@agent` overrides all other rules.
1. **Non-thread messages** — agents auto-respond if they're the only agent in the room, regardless of how many humans are present.
1. **Threads with one human** — normal auto-response behavior applies (the agent continues the conversation).
1. **Threads with two or more humans** — agents stay silent unless explicitly mentioned.
1. **Mentioning a non-agent user** — if a message tags only humans (not agents), agents stay silent.

#### Bot accounts

By default, any Matrix user that is not a MindRoom agent counts as a "human" for the rules above. This includes bridge bots (Telegram, Slack, etc.) and other non-MindRoom bots. If a bridge bot relays a message into a thread, it looks like a second human to MindRoom and triggers the mention requirement.

To prevent this, list those accounts in `bot_accounts`:

```
bot_accounts:
  - "@telegram:example.com"
  - "@slackbot:example.com"
```

Accounts in this list are treated like MindRoom agents for response logic — their messages and mentions don't count toward the multi-human detection.

### Routing Fallback

If routing fails (model error, invalid suggestion, etc.), the router sends a helpful error message: "I couldn't determine which agent should help with this. Please try mentioning an agent directly with @ or rephrase your request."

Users can always mention agents directly with `@agent_name` to bypass routing.

## Note on the Router Agent

The router is always present and cannot be disabled. It automatically joins any room with configured agents. If no `router` section is configured, it uses the default model.

# Tools

MindRoom includes 100+ tool integrations that agents can use to interact with external services.

## Enabling Tools

Tools are enabled per-agent in the configuration:

```
agents:
  assistant:
    display_name: Assistant
    role: A helpful assistant with file and web access
    model: sonnet
    tools:
      - file
      - shell
      - github
      - duckduckgo
```

You can also assign tools to all agents globally:

```
defaults:
  tools:
    - scheduler
```

`defaults.tools` are merged into each agent's own `tools` list with duplicates removed. Set `defaults.tools: []` to disable global default tools, or set `agents.<name>.include_default_tools: false` to opt out a specific agent.
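Putting the merge rules together (the agent names here are illustrative):

```
defaults:
  tools:
    - scheduler

agents:
  helper:
    display_name: Helper
    role: General assistant
    tools:
      - file        # effective tools: file + scheduler
  minimal:
    display_name: Minimal
    role: Assistant without default tools
    include_default_tools: false
    tools:
      - calculator  # effective tools: calculator only
```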

## Tool Categories

Tools are organized by category:

- **Development** - File operations, shell, Docker, GitHub, Jira, Python, Airflow, code execution sandboxes (E2B, Daytona, or MindRoom's built-in [container sandbox proxy](https://docs.mindroom.chat/deployment/sandbox-proxy/index.md)), Claude Agent SDK
- **Research** - Web search (DuckDuckGo, Tavily, Exa, SerpAPI), academic papers (arXiv, PubMed), Wikipedia, Hacker News, web scraping (Firecrawl, Crawl4AI, Jina)
- **Communication** - Slack, Discord, Telegram, Twilio, WhatsApp, Webex
- **Email** - Gmail, AWS SES, Resend, generic SMTP
- **Productivity** - Google Calendar, Todoist, Google Sheets, SQL, Pandas, CSV, DuckDB
- **Social** - Reddit, X/Twitter, Zoom
- **Entertainment** - YouTube, Giphy
- **Smart Home** - Home Assistant
- **Integrations** - Composio

## Quick Examples

### Research Agent

```
agents:
  researcher:
    display_name: Researcher
    role: Find and summarize information from the web and academic sources
    model: sonnet
    tools:
      - duckduckgo
      - arxiv
      - wikipedia
      - pubmed
```

### DevOps Agent

```
agents:
  devops:
    display_name: DevOps
    role: Manage infrastructure, containers, and deployments
    model: sonnet
    tools:
      - shell
      - docker
      - github
      - aws_lambda
```

### Communication Agent

```
agents:
  notifier:
    display_name: Notifier
    role: Send notifications and messages across platforms
    model: sonnet
    tools:
      - slack
      - telegram
      - gmail
```

## Automatic Dependency Installation

Each tool declares its Python dependencies as an optional extra in `pyproject.toml`. When an agent tries to use a tool whose dependencies aren't installed, MindRoom automatically installs them at runtime:

1. **Pre-check** — uses `importlib.util.find_spec()` to detect missing packages without importing anything
1. **Locked install** — runs `uv sync --locked --inexact --extra <tool>` to install exact pinned versions from `uv.lock`
1. **Fallback** — if no lockfile is available, falls back to `uv pip install` or `pip install`

This means you don't need to install all 100+ tool dependencies upfront — only the tools your agents actually use get installed.
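The pre-check in step 1 can be sketched as follows; the helper name and the example tool-to-package mapping are illustrative, not MindRoom's internals:

```
import importlib.util

def missing_packages(import_names: list[str]) -> list[str]:
    """Return the names in import_names that are not importable.

    importlib.util.find_spec() locates a module on sys.path without
    executing it, so the check is cheap and has no side effects.
    """
    return [name for name in import_names
            if importlib.util.find_spec(name) is None]

# Hypothetical check for a tool that needs the pandas package:
# an empty result means nothing needs to be installed.
needed = missing_packages(["pandas"])
```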

To disable auto-install, set the environment variable:

```
MINDROOM_NO_AUTO_INSTALL_TOOLS=1
```

To pre-install specific tool dependencies:

```
uv sync --extra gmail --extra slack --extra github
```

See the full list in:

- [Built-in Tools](https://docs.mindroom.chat/tools/builtin/index.md) - Complete list of available built-in tools with configuration details
- [MCP (Planned)](https://docs.mindroom.chat/tools/mcp/index.md) - Native MCP status and plugin-based workaround
- [Plugins](https://docs.mindroom.chat/plugins/index.md) - Extend MindRoom with custom tools and skills (including MCP via plugin workaround)

# Built-in Tools

MindRoom includes 100+ built-in tool integrations organized by category.

## File & System

| Icon                | Tool              | Description                                                         | Config Required               |
| ------------------- | ----------------- | ------------------------------------------------------------------- | ----------------------------- |
| :lucide-folder-cog: | `file`            | Read, write, list, search, and manage local files                   | -                             |
| :lucide-folder-cog: | `shell`           | Execute shell commands                                              | -                             |
| :lucide-folder-cog: | `docker`          | Manage Docker containers and images                                 | -                             |
| :lucide-folder-cog: | `python`          | Execute Python code                                                 | -                             |
| :lucide-folder-cog: | `sql`             | Database query and management for SQL databases                     | `db_url` or connection params |
| :lucide-folder-cog: | `postgres`        | Query PostgreSQL databases - list tables, describe schemas, run SQL | Connection params             |
| :lucide-folder-cog: | `redshift`        | Query Amazon Redshift data warehouse                                | Connection params             |
| :lucide-folder-cog: | `neo4j`           | Query Neo4j graph databases with Cypher                             | `uri`, `user`, `password`     |
| :lucide-folder-cog: | `duckdb`          | Query data with DuckDB                                              | -                             |
| :lucide-folder-cog: | `pandas`          | Data manipulation with Pandas                                       | -                             |
| :lucide-folder-cog: | `csv`             | Read and write CSV files                                            | -                             |
| :lucide-folder-cog: | `calculator`      | Mathematical calculations                                           | -                             |
| :lucide-folder-cog: | `reasoning`       | Step-by-step reasoning scratchpad for structured problem solving    | -                             |
| :lucide-folder-cog: | `file_generation` | Generate JSON, CSV, PDF, and text files from data                   | -                             |
| :lucide-folder-cog: | `visualization`   | Create bar, line, pie charts, scatter plots, and histograms         | -                             |
| :lucide-folder-cog: | `sleep`           | Pause execution                                                     | -                             |

## Web Search & Research

| Icon            | Tool           | Description                         | Config Required |
| --------------- | -------------- | ----------------------------------- | --------------- |
| :lucide-search: | `duckduckgo`   | DuckDuckGo web search               | -               |
| :lucide-search: | `googlesearch` | Google search via WebSearch backend | -               |
| :lucide-search: | `baidusearch`  | Baidu search                        | -               |
| :lucide-search: | `tavily`       | Real-time web search API            | `api_key`       |
| :lucide-search: | `exa`          | AI-powered web search and research  | `api_key`       |
| :lucide-search: | `serpapi`      | Search API aggregator               | `api_key`       |
| :lucide-search: | `serper`       | Google search API                   | `api_key`       |
| :lucide-search: | `searxng`      | Self-hosted metasearch              | `host`          |
| :lucide-search: | `linkup`       | Link discovery                      | `api_key`       |

## Web Scraping & Crawling

| Icon           | Tool                | Description                         | Config Required      |
| -------------- | ------------------- | ----------------------------------- | -------------------- |
| :lucide-globe: | `firecrawl`         | Web scraping and crawling           | `api_key`            |
| :lucide-globe: | `crawl4ai`          | AI-powered web crawling             | -                    |
| :lucide-globe: | `browserbase`       | Cloud browser automation            | `api_key`            |
| :lucide-globe: | `agentql`           | Structured web scraping             | `api_key`            |
| :lucide-globe: | `spider`            | Web spider/crawler                  | `api_key`            |
| :lucide-globe: | `scrapegraph`       | Graph-based scraping                | `api_key`            |
| :lucide-globe: | `apify`             | Web scraping platform               | `api_key`            |
| :lucide-globe: | `brightdata`        | Proxy and scraping                  | `api_key`            |
| :lucide-globe: | `oxylabs`           | Web scraping proxy                  | `api_key`            |
| :lucide-globe: | `jina`              | Web content reading and search      | `api_key` (optional) |
| :lucide-globe: | `website`           | Simple web fetching                 | -                    |
| :lucide-globe: | `trafilatura`       | Web content and metadata extraction | -                    |
| :lucide-globe: | `newspaper4k`       | Article extraction                  | -                    |
| :lucide-globe: | `web_browser_tools` | Browser automation                  | -                    |

## AI & ML APIs

| Icon              | Tool          | Description                                                                | Config Required |
| ----------------- | ------------- | -------------------------------------------------------------------------- | --------------- |
| :lucide-sparkles: | `openai`      | Transcription, image generation, and speech synthesis                      | `api_key`       |
| :lucide-sparkles: | `gemini`      | Google AI for image and video generation                                   | `api_key`       |
| :lucide-sparkles: | `groq`        | Fast AI inference for audio transcription, translation, and text-to-speech | `api_key`       |
| :lucide-sparkles: | `replicate`   | Generate images and videos using AI models                                 | `api_key`       |
| :lucide-sparkles: | `fal`         | AI media generation (images and videos)                                    | `api_key`       |
| :lucide-sparkles: | `dalle`       | DALL-E image generation                                                    | `api_key`       |
| :lucide-sparkles: | `cartesia`    | Text-to-speech and voice localization                                      | `api_key`       |
| :lucide-sparkles: | `eleven_labs` | Text-to-speech and sound effects                                           | `api_key`       |
| :lucide-sparkles: | `desi_vocal`  | Hindi and Indian language text-to-speech                                   | `api_key`       |
| :lucide-sparkles: | `lumalabs`    | 3D content creation and video generation                                   | `api_key`       |
| :lucide-sparkles: | `modelslabs`  | Generate videos, audio, and GIFs from text                                 | `api_key`       |

## Knowledge & Research

| Icon               | Tool         | Description                                             | Config Required |
| ------------------ | ------------ | ------------------------------------------------------- | --------------- |
| :lucide-book-open: | `arxiv`      | Search and read academic papers from ArXiv              | -               |
| :lucide-book-open: | `wikipedia`  | Search and retrieve information from Wikipedia          | -               |
| :lucide-book-open: | `pubmed`     | Search and retrieve medical and life science literature | -               |
| :lucide-book-open: | `hackernews` | Get top stories and user details from Hacker News       | -               |

## Communication & Social

| Icon                    | Tool             | Description                                                                                                          | Config Required                            |
| ----------------------- | ---------------- | -------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| :lucide-message-square: | `matrix_message` | Native Matrix messaging actions (`send`, `reply`, `thread-reply`, `react`, `read`, `thread-list`, `edit`, `context`) | -                                          |
| :lucide-message-square: | `gmail`          | Read, search, and manage Gmail emails                                                                                | Google OAuth                               |
| :lucide-message-square: | `slack`          | Send messages and manage channels                                                                                    | `token`                                    |
| :lucide-message-square: | `discord`        | Interact with Discord channels and servers                                                                           | `bot_token`                                |
| :lucide-message-square: | `telegram`       | Send messages via Telegram bot                                                                                       | `token`, `chat_id`                         |
| :lucide-message-square: | `whatsapp`       | WhatsApp Business API messaging                                                                                      | `access_token`, `phone_number_id`          |
| :lucide-message-square: | `twilio`         | SMS and voice                                                                                                        | `account_sid`, `auth_token`                |
| :lucide-message-square: | `webex`          | Webex Teams messaging                                                                                                | `access_token`                             |
| :lucide-message-square: | `resend`         | Transactional email                                                                                                  | `api_key`                                  |
| :lucide-message-square: | `email`          | Generic SMTP email                                                                                                   | SMTP config                                |
| :lucide-message-square: | `x`              | Post tweets, send DMs, and search X/Twitter                                                                          | `bearer_token` or OAuth credentials        |
| :lucide-message-square: | `reddit`         | Reddit browsing and interaction                                                                                      | `client_id`, `client_secret`               |
| :lucide-message-square: | `zoom`           | Video conferencing and meetings                                                                                      | `account_id`, `client_id`, `client_secret` |

## Project Management

| Icon            | Tool         | Description                                          | Config Required                                 |
| --------------- | ------------ | ---------------------------------------------------- | ----------------------------------------------- |
| :lucide-kanban: | `github`     | Repository and issue management                      | `access_token`                                  |
| :lucide-kanban: | `bitbucket`  | Bitbucket repository, PR, and issue management       | `username`, `password` or `token`               |
| :lucide-kanban: | `jira`       | Issue tracking and project management                | `server_url`, `username`, `password` or `token` |
| :lucide-kanban: | `linear`     | Issue tracking and project management                | `api_key`                                       |
| :lucide-kanban: | `clickup`    | ClickUp task, space, and list management             | `api_key`, `master_space_id`                    |
| :lucide-kanban: | `confluence` | Retrieve, create, and update wiki pages              | `url`, `username`, `password` or `api_key`      |
| :lucide-kanban: | `notion`     | Create, update, and search pages in Notion databases | `api_key`, `database_id`                        |
| :lucide-kanban: | `trello`     | Trello boards                                        | `api_key`, `token`                              |
| :lucide-kanban: | `todoist`    | Todoist task management                              | `api_token`                                     |
| :lucide-kanban: | `zendesk`    | Search help center articles                          | `username`, `password`, `company_name`          |

## Calendar & Scheduling

| Icon              | Tool              | Description                                          | Config Required |
| ----------------- | ----------------- | ---------------------------------------------------- | --------------- |
| :lucide-calendar: | `google_calendar` | View and schedule meetings                           | Google OAuth    |
| :lucide-calendar: | `cal_com`         | Cal.com scheduling                                   | `api_key`       |
| :lucide-calendar: | `scheduler`       | Schedule, edit, list, and cancel tasks and reminders | -               |

## Data & Business

| Icon                  | Tool                     | Description                                          | Config Required             |
| --------------------- | ------------------------ | ---------------------------------------------------- | --------------------------- |
| :lucide-chart-column: | `google_sheets`          | Read, create, update spreadsheets                    | Google OAuth                |
| :lucide-chart-column: | `yfinance`               | Financial data                                       | -                           |
| :lucide-chart-column: | `openbb`                 | Stock prices, company news, price targets via OpenBB | `openbb_pat` (optional)     |
| :lucide-chart-column: | `shopify`                | Shopify store sales data, products, orders           | `shop_name`, `access_token` |
| :lucide-chart-column: | `financial_datasets_api` | Financial datasets                                   | `api_key`                   |

## Location & Maps

| Icon                | Tool          | Description     | Config Required |
| ------------------- | ------------- | --------------- | --------------- |
| :lucide-map-pinned: | `google_maps` | Maps and places | `api_key`       |
| :lucide-map-pinned: | `openweather` | Weather data    | `api_key`       |

## DevOps & Infrastructure

| Icon            | Tool              | Description                                                   | Config Required                  |
| --------------- | ----------------- | ------------------------------------------------------------- | -------------------------------- |
| :lucide-server: | `aws_lambda`      | AWS Lambda functions                                          | AWS credentials                  |
| :lucide-server: | `aws_ses`         | AWS email service                                             | AWS credentials                  |
| :lucide-server: | `airflow`         | Apache Airflow DAG file management                            | -                                |
| :lucide-server: | `e2b`             | Code execution sandbox                                        | `api_key`                        |
| :lucide-server: | `daytona`         | Development environments                                      | `api_key`                        |
| :lucide-server: | `claude_agent`    | Persistent Claude coding sessions with tool use and subagents | `api_key` (recommended)          |
| :lucide-server: | `composio`        | API composition                                               | `api_key`                        |
| :lucide-server: | `google_bigquery` | Query Google BigQuery - list tables, schemas, run SQL         | `dataset`, `project`, `location` |

## Smart Home

| Icon           | Tool            | Description                            | Config Required                            |
| -------------- | --------------- | -------------------------------------- | ------------------------------------------ |
| :lucide-house: | `homeassistant` | Control and monitor smart home devices | `HOMEASSISTANT_URL`, `HOMEASSISTANT_TOKEN` |

## Media & Entertainment

| Icon                  | Tool                  | Description                                          | Config Required |
| --------------------- | --------------------- | ---------------------------------------------------- | --------------- |
| :lucide-clapperboard: | `youtube`             | Extract video data, captions, and timestamps         | -               |
| :lucide-clapperboard: | `spotify`             | Search tracks, manage playlists, get recommendations | `access_token`  |
| :lucide-clapperboard: | `giphy`               | GIF search                                           | `api_key`       |
| :lucide-clapperboard: | `moviepy_video_tools` | Video processing                                     | -               |
| :lucide-clapperboard: | `unsplash`            | Search and retrieve royalty-free images              | `access_key`    |
| :lucide-clapperboard: | `brandfetch`          | Retrieve brand logos, colors, and fonts by domain    | `api_key`       |

## Memory & Storage

| Icon               | Tool          | Description                                                                                                                                        | Config Required                |
| ------------------ | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ |
| :lucide-database:  | `memory`      | Explicitly store and search agent memories on demand                                                                                               | -                              |
| :lucide-database:  | `mem0`        | Persistent memory system                                                                                                                           | `api_key` (optional for cloud) |
| :lucide-database:  | `zep`         | Conversation memory                                                                                                                                | `api_key`                      |
| :lucide-paperclip: | `attachments` | List and register context-scoped file attachments (send via `matrix_message`) (see [Attachments](https://docs.mindroom.chat/attachments/index.md)) | -                              |

## Custom & Config

| Icon                        | Tool             | Description                                   | Config Required |
| --------------------------- | ---------------- | --------------------------------------------- | --------------- |
| :lucide-sliders-horizontal: | `custom_api`     | Custom API calls                              | Varies          |
| :lucide-sliders-horizontal: | `config_manager` | MindRoom configuration management             | -               |
| :lucide-workflow:           | `subagents`      | Spawn and communicate with sub-agent sessions | -               |

Tool presets are config-only macros, not runtime tools. For OpenClaw workspace portability, `openclaw_compat` expands to `shell`, `coding`, `duckduckgo`, `website`, `browser`, `scheduler`, `subagents`, `matrix_message`, and `attachments`.

## Claude Agent Sessions

The `claude_agent` tool manages long-lived Claude coding sessions on the backend. This allows iterative coding workflows in the same session (including Claude-side tool usage and subagents).

When using the OpenAI-compatible API, set `X-Session-Id` to keep tool sessions stable across requests. See [OpenAI API Compatibility](https://docs.mindroom.chat/openai-api/#session-continuity).

Add `claude_agent` to an agent's tools in `config.yaml`:

```
agents:
  code:
    display_name: Code Agent
    role: Coding assistant with persistent Claude sessions
    model: general
    tools:
      - claude_agent
```

Configure credentials via the dashboard or by writing `mindroom_data/credentials/claude_agent_credentials.json`:

```
{
  "api_key": "sk-ant-or-proxy-key",
  "model": "claude-sonnet-4-5",
  "permission_mode": "default",
  "continue_conversation": true,
  "session_ttl_minutes": 60,
  "max_sessions": 200
}
```

To run through an Anthropic-compatible gateway (for example LiteLLM `/v1/messages`):

```
{
  "api_key": "sk-dummy",
  "anthropic_base_url": "http://litellm.local",
  "anthropic_auth_token": "sk-dummy",
  "disable_experimental_betas": true
}
```

Use the gateway host root for `anthropic_base_url` (no `/v1` suffix), because Claude clients append `/v1/messages`. Some Anthropic-compatible backends may reject Claude's `anthropic-beta` headers. Set `disable_experimental_betas` to `true` in that case.

## Enabling Tools

Add tools to agents in `config.yaml`:

```
agents:
  assistant:
    display_name: Assistant
    role: A helpful assistant
    model: sonnet
    tools:
      - file
      - shell
      - duckduckgo
      - github
```

Or use the Dashboard's Agents tab to enable tools visually.

## Environment Variables

Most tools require API keys or credentials. Set them in your `.env` file:

```
# Search
TAVILY_API_KEY=tvly-...
EXA_API_KEY=...

# Communication
SLACK_BOT_TOKEN=xoxb-...
GITHUB_TOKEN=ghp_...

# AI Services
OPENAI_API_KEY=sk-...
REPLICATE_API_TOKEN=r8_...
```

MindRoom automatically loads `.env` files from the working directory.

# MCP (Planned)

> [!WARNING] MindRoom does not currently support direct MCP server configuration in `config.yaml`.

MCP can still be used today through the plugin system by wrapping Agno `MCPTools` in a plugin tool factory.

See [Plugins](https://docs.mindroom.chat/plugins/#mcp-via-plugins-advanced) for the current workaround and setup instructions.

This page remains as a compatibility pointer and will be expanded when native MCP support is added.

# OpenClaw Workspace Import

MindRoom supports a practical OpenClaw-compatible workflow focused on workspace portability:

- Reuse your OpenClaw markdown files (`SOUL.md`, `AGENTS.md`, `USER.md`, `MEMORY.md`, etc.)
- Use the `openclaw_compat` preset to enable a native MindRoom tool bundle
- Use MindRoom's unified memory backend (`memory.backend`) for persistence
- Optionally add semantic recall over workspace files via knowledge bases

## What this is (and is not)

MindRoom is compatible with OpenClaw workspace patterns, not a full OpenClaw gateway clone.

Works well:

- File-based identity and memory documents
- OpenClaw-inspired behavior and instructions
- Native MindRoom tool bundle via the `openclaw_compat` preset
- Native Matrix messaging via the `matrix_message` tool in the preset bundle
- Native sub-agent session orchestration via the `subagents` tool in the preset bundle

Not included:

- OpenClaw gateway control plane
- Device nodes and canvas platform tools
- OpenClaw alias-name wrapper APIs like `exec`, `process`, `web_search`, and `web_fetch`
- `tts` and `image` aliases (use MindRoom's native TTS/image tools directly)
- Heartbeat runtime - schedule heartbeats via `cron`/`scheduler` instead

## The `openclaw_compat` preset

`openclaw_compat` is a config macro, not a runtime toolkit. `Config.get_agent_tools` expands it into native MindRoom tools and dedupes while preserving order.

Preset expansion:

- `shell`
- `coding`
- `duckduckgo`
- `website`
- `browser`
- `scheduler`
- `subagents`
- `matrix_message`
- `attachments`

Memory is not a separate OpenClaw subsystem in MindRoom. It uses the normal MindRoom memory backend.
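Because expansion dedupes while preserving order, it is safe to list the preset next to one of its own members. For example (agent name illustrative):

```
agents:
  dev:
    tools:
      - shell            # listed explicitly
      - openclaw_compat  # expands to the list above; the duplicate `shell` is dropped
      - python           # extra tools can follow the preset
```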

## Drop-in config

Use this as a starting point for importing an OpenClaw workspace:

```
agents:
  openclaw:
    display_name: OpenClawAgent
    include_default_tools: false
    learning: false
    memory_backend: file
    memory_file_path: ./openclaw_data
    model: opus
    role: OpenClaw-style personal assistant with persistent file-based identity and memory.
    rooms: [personal]

    instructions:
      - You wake up fresh each session with no memory of previous conversations. Your context files are already loaded into your system prompt.
      - Important long-term context is persisted by the configured MindRoom memory backend. If something must be preserved exactly, write/update the relevant file directly.
      - MEMORY.md is curated long-term memory; daily files are short-lived notes and logs.
      - Ask before external/public actions and destructive operations.
      - Before answering prior-history questions, search memory files first with `search_knowledge_base` when configured.

    context_files:
      - ./openclaw_data/SOUL.md
      - ./openclaw_data/AGENTS.md
      - ./openclaw_data/USER.md
      - ./openclaw_data/IDENTITY.md
      - ./openclaw_data/TOOLS.md
      - ./openclaw_data/HEARTBEAT.md

    knowledge_bases: [openclaw_memory]

    tools:
      - openclaw_compat
      - python

    skills:
      - transcribe

knowledge_bases:
  openclaw_memory:
    path: ./openclaw_data/memory
    watch: true

memory:
  file:
    max_entrypoint_lines: 200
  auto_flush:
    enabled: true
```

`memory_file_path` points the file-memory scope directly at the workspace root, so `MEMORY.md` is loaded automatically by the file backend as the entrypoint — no need to list it in `context_files`. `memory_file_path` is ignored unless the effective backend is `file`; if you switch this agent to `mem0`, re-add `MEMORY.md` to `context_files` when you still want it preloaded. The `openclaw_compat` preset already expands to native shell, coding, search/fetch, browser, scheduler, sub-agent orchestration, `matrix_message`, and `attachments` tools, so listing those tools individually is not necessary.

## Recommended workspace layout

```
openclaw_data/
├── SOUL.md
├── AGENTS.md
├── USER.md
├── IDENTITY.md
├── MEMORY.md
├── TOOLS.md
├── HEARTBEAT.md
└── memory/
    ├── YYYY-MM-DD.md
    └── topic-notes.md
```

## Unified memory behavior

OpenClaw-compatible agents use the same memory system as every other MindRoom agent:

- `memory.backend: mem0` for vector memory (global default)
- `memory.backend: file` for file-first memory (global default)
- `memory_backend: file` on an individual agent to override the global default
- `memory_file_path: ./openclaw_data` to point the file-memory scope at an existing workspace directory instead of the default `<root>/agent_<name>/`
- Agents that use file memory without `memory_file_path` continue to use the global `memory.file.path` (or the default `<storage_path>/memory_files/`)
- optional `knowledge_bases` for semantic recall over arbitrary workspace folders

Recommended for OpenClaw-style setups: `memory_backend: file` with `memory_file_path` pointing at the workspace root and `memory.auto_flush.enabled: true`.

## Context Management

MindRoom includes built-in context controls for OpenClaw-style agents:

- **Conversation history** is managed by Agno's session system - previous turns (including tool calls and results) are automatically replayed. Control depth with `num_history_runs` or `num_history_messages` (see [Agents](https://docs.mindroom.chat/configuration/agents/index.md)).
- **Preloaded role context** from `context_files` is hard-capped by `defaults.max_preload_chars`.
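Both knobs can be set in `config.yaml`; the values shown here are illustrative, not defaults:

```
defaults:
  max_preload_chars: 100000   # hard cap on preloaded context_files content

agents:
  openclaw:
    num_history_runs: 10      # replay only the last 10 turns
```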

## Known limitations

**Threading model:** MindRoom responds in Matrix threads by default. OpenClaw uses continuous room-level conversations. To match this behavior on mobile or via bridges (Telegram, Signal, WhatsApp), set `thread_mode: room` on the agent - this sends plain room messages with a single persistent session per room instead of creating threads.
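A minimal sketch of that override:

```
agents:
  openclaw:
    thread_mode: room   # plain room messages, one persistent session per room
```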

## Privacy guidance

`context_files` apply to all rooms for that agent. If `MEMORY.md` is sensitive:

- Keep the agent in private rooms only, or
- Split into private/public agents and exclude sensitive files from the public agent

## Skills

Skills are loaded from `~/.mindroom/skills/<name>/`. To use an OpenClaw skill like `transcribe`, copy the skill directory from your OpenClaw workspace:

```
mkdir -p ~/.mindroom/skills
cp -r /path/to/openclaw-workspace/skills/transcribe ~/.mindroom/skills/
```

Set required environment variables (for example `WHISPER_URL`) as defined in the skill's `SKILL.md` frontmatter.

# Skills

MindRoom uses Agno's skills system with OpenClaw-compatible metadata. Skills are instruction packs (a `SKILL.md` file) with optional scripts and references that guide agents without adding new code capabilities.

## Skill directory structure

A skill is a directory containing:

```
my-skill/
├── SKILL.md         # Required: frontmatter + instructions
├── scripts/         # Optional: executable scripts
│   └── audit.sh
└── references/      # Optional: reference documents
    └── examples.md
```

Agents access skills via `get_skill_instructions()`, scripts via `get_skill_script()`, and references via `get_skill_reference()`.

## SKILL.md format (OpenClaw compatible)

```
---
name: repo-quick-audit
description: Quick repository audit checklist
metadata: '{openclaw:{requires:{bins:["git"], env:["GITHUB_TOKEN"]}}}'
user-invocable: true
disable-model-invocation: false
command-dispatch: tool
command-tool: repo_audit.run
command-arg-mode: raw
---

# Repo Quick Audit

1. Check CI status
2. Review open issues
```

Notes:

- `metadata` can be a JSON5 string (shown above) or a YAML mapping.
- `user-invocable`, `disable-model-invocation`, and `command-*` also accept snake_case names.
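The same metadata expressed as a YAML mapping instead of a JSON5 string:

```
---
name: repo-quick-audit
description: Quick repository audit checklist
metadata:
  openclaw:
    requires:
      bins: ["git"]
      env: ["GITHUB_TOKEN"]
---
```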

## Frontmatter fields

| Field                      | Type                    | Description                                                                                                            |
| -------------------------- | ----------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| `name`                     | string                  | Unique skill identifier                                                                                                |
| `description`              | string                  | Brief summary shown to users/models                                                                                    |
| `metadata`                 | mapping or JSON5 string | OpenClaw metadata and custom fields                                                                                    |
| `user-invocable`           | bool                    | Allow `!skill` (default: true)                                                                                         |
| `disable-model-invocation` | bool                    | Prevent model invocation (default: false)                                                                              |
| `command-dispatch`         | `"tool"`                | Set to `tool` to run a tool directly                                                                                   |
| `command-tool`             | string                  | Function to call: `function_name`, `toolkit.function_name`, or `toolkit` (if the toolkit exposes exactly one function) |
| `command-arg-mode`         | `"raw"`                 | Argument passing mode; only `raw` is currently supported                                                               |
| `license`                  | string                  | Optional license information                                                                                           |
| `compatibility`            | string                  | Optional compatibility requirements                                                                                    |
| `allowed-tools`            | list                    | Optional list of tools this skill is allowed to use                                                                    |

## Eligibility gating (OpenClaw metadata)

If `metadata.openclaw` is present, MindRoom filters skills using these rules:

- `always: true` bypasses all checks
- `os: ["linux", "darwin", "windows"]`
- `requires.env`: env var set or credential key exists
- `requires.config`: config path is truthy (e.g., `agents.code.tools`)
- `requires.bins`: all binaries must exist in PATH
- `requires.anyBins`: at least one binary must exist in PATH

Skills without `metadata.openclaw` are always eligible.
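An eligibility block combining several of these rules (all values illustrative):

```
metadata:
  openclaw:
    os: ["linux", "darwin"]
    requires:
      bins: ["git"]
      anyBins: ["rg", "grep"]
      env: ["GITHUB_TOKEN"]
      config: ["agents.code.tools"]
```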

## Skill locations and precedence

MindRoom loads skills from these locations, in this order:

1. Bundled skills: `skills/` at the repository root (if present)
1. Plugin-provided skill directories (see [Plugins](https://docs.mindroom.chat/plugins/index.md))
1. User skills: `~/.mindroom/skills/`

If multiple skills share the same name, the last one wins (user > plugin > bundled).

## Configuring skills

Add skills to an agent allowlist in `config.yaml`:

```
agents:
  developer:
    display_name: Developer
    role: A coding assistant
    model: sonnet
    skills:
      - repo-quick-audit
      - code-review
```

If `skills` is empty or unset, the agent gets no skills.

## Using skills at runtime

Agents see available skills in the system prompt and can load details using these tools:

- `get_skill_instructions(skill_name)` - Load the full instructions for a skill
- `get_skill_reference(skill_name, reference_path)` - Access reference documentation
- `get_skill_script(skill_name, script_path, execute=False, args=None, timeout=30)` - Read or execute scripts

## Skill command dispatch (`!skill`)

Users can run a skill by name:

```
!skill repo-quick-audit --recent
```

Agent resolution:

- If you mention an agent (e.g., `@mindroom_code !skill build`), that agent handles the skill.
- If only one agent in the room has the skill enabled, it handles the request.
- If multiple agents have the skill, you must mention one to disambiguate.

Rules:

- The skill must be in the agent allowlist and `user-invocable` must be `true`.
- If `command-dispatch: tool` is set, MindRoom runs the tool directly.
- If `disable-model-invocation: true` and no tool dispatch is configured, the command fails.

## Skill vs tool

| Aspect       | Skills                    | Tools            |
| ------------ | ------------------------- | ---------------- |
| Definition   | Markdown + YAML           | Python code      |
| Location     | File system               | Code/plugins     |
| Filtering    | Automatic by requirements | Always available |
| Instructions | Rich markdown             | Docstrings       |
| Invocation   | User or model             | Model only       |

## Hot reloading

MindRoom polls skill directories every second. When a `SKILL.md` file is added, removed, or modified, the skill cache is automatically cleared so agents pick up the new instructions on their next request.
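Change detection amounts to comparing mtime snapshots between polls; a minimal sketch of the idea (not MindRoom's actual implementation):

```
from pathlib import Path


def skill_snapshot(root: Path) -> dict[str, float]:
    """Map each SKILL.md under root to its modification time."""
    return {str(p): p.stat().st_mtime for p in root.rglob("SKILL.md")}


def changed(before: dict[str, float], after: dict[str, float]) -> bool:
    """True if a skill file was added, removed, or modified."""
    return before != after
```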

## Best practices

1. Keep skills focused - one skill per capability
1. Declare dependencies with `metadata.openclaw.requires`
1. Use descriptive names like `code-review`

# Plugins

MindRoom plugins add tools and can optionally ship skills. Plugins are loaded from paths listed in `config.yaml`.

## Plugin structure

A plugin is a directory containing `mindroom.plugin.json`:

```
my-plugin/
├── mindroom.plugin.json
├── tools.py
└── skills/
    └── my-skill/
        └── SKILL.md
```

## Manifest format

```
{
  "name": "my-plugin",
  "tools_module": "tools.py",
  "skills": ["skills"]
}
```

| Field          | Type            | Description                                       |
| -------------- | --------------- | ------------------------------------------------- |
| `name`         | string          | Plugin identifier (required)                      |
| `tools_module` | string          | Path to the tools module (optional)               |
| `skills`       | list of strings | Relative directories containing skills (optional) |

Unknown fields are ignored.
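A minimal loader showing the manifest contract (a sketch, not MindRoom's code):

```
import json
from pathlib import Path


def load_manifest(plugin_dir: str) -> dict:
    """Parse mindroom.plugin.json, ignoring unknown fields."""
    data = json.loads(Path(plugin_dir, "mindroom.plugin.json").read_text())
    return {
        "name": data["name"],                      # required
        "tools_module": data.get("tools_module"),  # optional
        "skills": data.get("skills", []),          # optional
    }
```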

## Configure plugins

Add plugin paths to `config.yaml`:

```
plugins:
  - ./plugins/my-plugin
  - python:my_skill_pack
```

Paths may be:

- Absolute paths
- Paths relative to `config.yaml`
- Python package specs (see below)

## Python package plugins

MindRoom can resolve plugins from installed Python packages:

```
plugins:
  - my_skill_pack
  - python:my_skill_pack
  - pkg:my_skill_pack:plugins/demo
  - module:my_skill_pack:plugins/demo
```

Rules:

- A bare package name is allowed if it contains no slashes.
- `python:`, `pkg:`, and `module:` are explicit prefixes.
- `:sub/path` points to a subdirectory inside the package.

MindRoom resolves the package location and looks for `mindroom.plugin.json` in that directory.
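The resolution rules above can be sketched with `importlib` (illustrative only; MindRoom's actual resolver may differ):

```
import importlib.util
from pathlib import Path


def resolve_plugin_dir(spec: str) -> Path:
    """Resolve a spec like 'python:my_pkg' or 'pkg:my_pkg:plugins/demo' to a directory."""
    for prefix in ("python:", "pkg:", "module:"):
        if spec.startswith(prefix):
            spec = spec[len(prefix):]
            break
    pkg, _, subpath = spec.partition(":")
    found = importlib.util.find_spec(pkg)
    if found is None or not found.submodule_search_locations:
        raise ValueError(f"cannot resolve package {pkg!r}")
    root = Path(next(iter(found.submodule_search_locations)))
    return root / subpath if subpath else root
```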

## MCP via plugins (advanced)

MindRoom does not yet support direct MCP server configuration in `config.yaml`. If you need MCP today, wrap Agno `MCPTools` in a plugin tool factory:

```
from agno.tools.mcp import MCPTools
from mindroom.tool_system.metadata import (
    SetupType,
    ToolCategory,
    ToolStatus,
    register_tool_with_metadata,
)


class FilesystemMCPTools(MCPTools):
    def __init__(self, **kwargs):
        super().__init__(
            command="npx -y @modelcontextprotocol/server-filesystem /path/to/dir",
            **kwargs,
        )


@register_tool_with_metadata(
    name="mcp_filesystem",
    display_name="MCP Filesystem",
    description="Tools from an MCP filesystem server",
    category=ToolCategory.DEVELOPMENT,
    status=ToolStatus.AVAILABLE,
    setup_type=SetupType.NONE,
)
def mcp_filesystem_tools():
    return FilesystemMCPTools
```

Reference the plugin and tool in `config.yaml`:

```
plugins:
  - ./plugins/mcp-filesystem

agents:
  assistant:
    tools:
      - mcp_filesystem
```

The factory function must return the toolkit class, not an instance. MCP toolkits are async; Agno's async agent runs (`arun`, `aprint_response`) handle MCP connect and disconnect automatically.

## Tools module example

```
from __future__ import annotations

from typing import TYPE_CHECKING

from mindroom.tool_system.metadata import (
    SetupType,
    ToolCategory,
    ToolStatus,
    register_tool_with_metadata,
)

if TYPE_CHECKING:
    from agno.tools import Toolkit


@register_tool_with_metadata(
    name="greeter",
    display_name="Greeter",
    description="A simple greeting tool",
    category=ToolCategory.DEVELOPMENT,
    status=ToolStatus.AVAILABLE,
    setup_type=SetupType.NONE,
)
def greeter_tools() -> type[Toolkit]:
    from agno.tools import Toolkit

    class GreeterTools(Toolkit):
        """A simple greeting toolkit."""

        def __init__(self) -> None:
            super().__init__(name="greeter", tools=[self.greet])

        def greet(self, name: str) -> str:
            """Greet someone by name."""
            return f"Hello, {name}!"

    return GreeterTools
```

The factory function (decorated with `@register_tool_with_metadata`) must return the **class**, not an instance. MindRoom instantiates the class when building agents.

All decorator arguments are keyword-only. Required fields:

- `name`: Tool identifier
- `display_name`: Human-readable name
- `description`: Brief description
- `category`: A `ToolCategory` enum value

Common optional fields:

- `status`: `ToolStatus.AVAILABLE` (default), `COMING_SOON`, or `REQUIRES_CONFIG`
- `setup_type`: `SetupType.NONE` (default), `API_KEY`, `OAUTH`, or `SPECIAL`
- `config_fields`: List of `ConfigField` objects for configuration
- `dependencies`: List of required pip packages
- `docs_url`: Link to documentation

## Plugin skills

List skill directories in the manifest `skills` array. Those directories are added to the skill search roots.

## Reloading plugins

Plugin manifests and tools modules are cached by mtime. Changes are picked up the next time MindRoom reloads the tool registry (for example, on startup or config reload).

## Security notes

Plugins execute code in-process. Only install plugins you trust.

# Knowledge Bases

Knowledge bases give your agents access to your own documents through RAG (Retrieval-Augmented Generation). Drop files into a folder, point a knowledge base at it, and agents can search the indexed content when answering questions.

## How It Works

1. You configure a knowledge base pointing to a folder of documents
1. MindRoom indexes the files into a vector database (ChromaDB) using an embedder
1. Agents assigned to that knowledge base get a search tool that queries the indexed documents
1. When the agent uses the tool, relevant document chunks are included in its context

```
Indexing (startup + file changes):

  ┌──────────────┐      ┌──────────┐      ┌──────────┐
  │ Files/Folder │ ───▶ │ Embedder │ ───▶ │ ChromaDB │
  └──────────────┘      └──────────┘      └──────────┘
         ▲
         │ file watcher
         │ git sync

Querying (agentic RAG):

  ┌───────┐  search   ┌──────────┐
  │ Agent │ ────────▶ │ ChromaDB │
  │       │ ◀──────── │          │
  └───────┘  chunks   └──────────┘
```

## Quick Start

Add a knowledge base and assign it to an agent:

```
knowledge_bases:
  docs:
    path: ./knowledge_docs
    watch: true
    chunk_size: 5000
    chunk_overlap: 0

agents:
  assistant:
    display_name: Assistant
    role: A helpful assistant with access to our docs
    knowledge_bases: [docs]
```

Place files in `./knowledge_docs/` and they'll be indexed automatically on startup. When `watch: true`, new or modified files are re-indexed in real time.

## Configuration

### Basic Knowledge Base

```
knowledge_bases:
  my_docs:
    path: ./knowledge_docs/my_docs   # Folder containing documents
    watch: true                       # Auto-reindex on file changes
    chunk_size: 5000                  # Max characters per chunk
    chunk_overlap: 0                  # Overlap between adjacent chunks
```

| Field           | Type   | Default            | Description                                                         |
| --------------- | ------ | ------------------ | ------------------------------------------------------------------- |
| `path`          | string | `./knowledge_docs` | Folder path (relative to the config file directory or absolute)     |
| `watch`         | bool   | `true`             | Watch for filesystem changes and reindex automatically              |
| `chunk_size`    | int    | `5000`             | Maximum characters per chunk for text-like files (minimum: `128`)   |
| `chunk_overlap` | int    | `0`                | Overlap characters between adjacent chunks (must be `< chunk_size`) |
| `git`           | object | `null`             | Optional Git repository sync settings                               |

Use smaller `chunk_size` values when your embedding server has low token or batch limits. If chunks are too large, the embedder can return 500 errors and indexing retries will keep failing.
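Conceptually, chunking with overlap works like this (a sketch of what the parameters mean, not MindRoom's actual chunker):

```
def chunk_text(text: str, chunk_size: int = 5000, chunk_overlap: int = 0) -> list[str]:
    """Split text into chunks of at most chunk_size chars, overlapping by chunk_overlap."""
    if chunk_size < 128 or chunk_overlap >= chunk_size:
        raise ValueError("need chunk_size >= 128 and chunk_overlap < chunk_size")
    step = chunk_size - chunk_overlap  # how far each chunk's start advances
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```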

### Multiple Knowledge Bases

You can define multiple knowledge bases and assign them to different agents:

```
knowledge_bases:
  engineering:
    path: ./knowledge_docs/engineering
    watch: true
    chunk_size: 5000
    chunk_overlap: 0
  product:
    path: ./knowledge_docs/product
    watch: true
    chunk_size: 5000
    chunk_overlap: 0
  legal:
    path: ./knowledge_docs/legal
    watch: false
    chunk_size: 1000
    chunk_overlap: 100

agents:
  developer:
    display_name: Developer
    role: Engineering assistant
    knowledge_bases: [engineering]

  pm:
    display_name: Product Manager
    role: Product planning assistant
    knowledge_bases: [product, engineering]  # Can access multiple bases

  compliance:
    display_name: Compliance
    role: Legal and compliance reviewer
    knowledge_bases: [legal]
```

When an agent has multiple knowledge bases, results are interleaved fairly so no single base dominates the top results.
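Fair interleaving here means round-robin merging of each base's ranked results; a sketch of the idea (not the actual merge code):

```
def interleave(result_lists: list[list[str]], limit: int) -> list[str]:
    """Round-robin merge: take rank-1 hits from every base, then rank-2, and so on."""
    merged: list[str] = []
    rank = 0
    while len(merged) < limit and any(rank < len(r) for r in result_lists):
        for results in result_lists:
            if rank < len(results):
                merged.append(results[rank])
        rank += 1
    return merged[:limit]
```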

## Git-Backed Knowledge Bases

Knowledge bases can sync from a Git repository. MindRoom clones the repo on first run and periodically pulls updates.

```
knowledge_bases:
  pipefunc_docs:
    path: ./knowledge_docs/pipefunc
    watch: false
    chunk_size: 1200
    chunk_overlap: 120
    git:
      repo_url: https://github.com/pipefunc/pipefunc
      branch: main
      poll_interval_seconds: 300
      skip_hidden: true
      include_patterns:
        - "docs/**"
```

### Git Configuration Fields

| Field                   | Type   | Default    | Description                                          |
| ----------------------- | ------ | ---------- | ---------------------------------------------------- |
| `repo_url`              | string | *required* | HTTPS repository URL to clone/fetch                  |
| `branch`                | string | `main`     | Branch to track                                      |
| `poll_interval_seconds` | int    | `300`      | How often to check for updates (minimum: 5)          |
| `credentials_service`   | string | `null`     | Service name in CredentialsManager for private repos |
| `skip_hidden`           | bool   | `true`     | Skip files/folders starting with `.`                 |
| `include_patterns`      | list   | `[]`       | Root-anchored glob patterns to include               |
| `exclude_patterns`      | list   | `[]`       | Root-anchored glob patterns to exclude               |

### Sync Behavior

- On startup, the repo is cloned (or fetched if it already exists)
- Every `poll_interval_seconds`, MindRoom runs `git fetch` + `git reset --hard origin/<branch>`
- Local uncommitted changes in the checkout folder are discarded on each sync
- Only changed files are re-indexed (not the entire repo each time)
- Deleted files are automatically removed from the index
- Git polling runs regardless of the `watch` setting — `watch` controls only local filesystem events

### File Filtering with Patterns

Patterns are matched from the repository root. `*` matches one path segment, `**` matches zero or more segments.

```
knowledge_bases:
  project_docs:
    path: ./knowledge_docs/project
    git:
      repo_url: https://github.com/org/project
      include_patterns:
        - "docs/**"                    # All files under docs/
        - "README.md"                  # Root README only
        - "content/posts/*/index.md"   # Specific nested files
      exclude_patterns:
        - "docs/internal/**"           # Exclude internal docs
```

- If `include_patterns` is empty, all non-hidden files are eligible
- If `include_patterns` is set, a file must match at least one pattern
- `exclude_patterns` are applied last and remove matching files
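These semantics can be sketched with a segment-wise matcher (illustrative; MindRoom's actual matcher may differ in details):

```
from fnmatch import fnmatchcase


def glob_match(pattern: str, path: str) -> bool:
    """Root-anchored match: '*' spans one path segment, '**' spans zero or more."""
    def match(pseg: list[str], fseg: list[str]) -> bool:
        if not pseg:
            return not fseg
        if pseg[0] == "**":  # try consuming 0..n path segments
            return any(match(pseg[1:], fseg[i:]) for i in range(len(fseg) + 1))
        return bool(fseg) and fnmatchcase(fseg[0], pseg[0]) and match(pseg[1:], fseg[1:])

    return match(pattern.split("/"), path.split("/"))


def eligible(path: str, include: list[str], exclude: list[str]) -> bool:
    """Include rules first (empty list means everything), then exclude rules last."""
    included = not include or any(glob_match(p, path) for p in include)
    return included and not any(glob_match(p, path) for p in exclude)
```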

### Private Repository Authentication

For private HTTPS repositories, store credentials and reference them in the config.

**Step 1:** Store credentials via the API or Dashboard (Credentials tab):

```
curl -X POST http://localhost:8765/api/credentials/github_private \
  -H "Content-Type: application/json" \
  -d '{"credentials":{"username":"x-access-token","token":"ghp_your_token_here"}}'
```

**Step 2:** Reference the service name in your knowledge base config:

```
knowledge_bases:
  private_docs:
    path: ./knowledge_docs/private
    git:
      repo_url: https://github.com/org/private-repo
      credentials_service: github_private
```

Accepted credential fields:

| Fields                  | Notes                                           |
| ----------------------- | ----------------------------------------------- |
| `username` + `token`    | Standard GitHub/GitLab access token auth        |
| `username` + `password` | Basic HTTP auth                                 |
| `api_key`               | Uses `x-access-token` as username automatically |

## Embedder Configuration

Knowledge bases use the same embedder configured in the `memory` section:

```
memory:
  embedder:
    provider: openai        # or "ollama"
    config:
      model: text-embedding-3-small
      host: null             # For self-hosted (Ollama)
```

| Provider | Model Example            | Notes                                    |
| -------- | ------------------------ | ---------------------------------------- |
| `openai` | `text-embedding-3-small` | Requires `OPENAI_API_KEY`                |
| `ollama` | `nomic-embed-text`       | Self-hosted, set `host` or `OLLAMA_HOST` |

## Storage

Knowledge data is stored under `<storage_path>/knowledge_db/<base_id>_<hash>/`. Each knowledge base gets its own ChromaDB collection named `mindroom_knowledge_<base_id>_<hash>`.

The storage path defaults to `mindroom_data/` next to your `config.yaml`, or can be set with `MINDROOM_STORAGE_PATH`.

## Dashboard Management

The web dashboard provides a Knowledge tab for managing knowledge bases without editing YAML:

- Create, edit, and delete knowledge bases
- Configure chunk size and overlap per knowledge base
- Configure Git sync settings
- Upload and remove files
- Trigger a full reindex on demand
- Monitor indexing status (file count vs. indexed count)
- Assign knowledge bases to agents from the Agents tab

## API Endpoints

See the [Dashboard API reference](https://docs.mindroom.chat/dashboard/#knowledge) for the full list of knowledge base endpoints (list, upload, delete, reindex, status).

## Hot Reload

Knowledge base configuration supports hot reload. When you change `config.yaml`:

- New knowledge bases are created and indexed
- Removed knowledge bases are stopped and cleaned up
- Changed settings (path, chunking, embedder, git config) trigger a re-initialization
- Unchanged knowledge bases continue running without interruption
- File watchers are preserved across reloads

# Memory System

MindRoom supports two memory backends:

- `mem0`: vector memory (semantic retrieval + extraction via Mem0)
- `file`: markdown memory files (`MEMORY.md` plus optional dated notes)

Set the global default backend with `memory.backend`. Override the backend per agent with `agents.<name>.memory_backend`. Set `agents.<name>.memory_file_path` to point an individual file-backed agent at a custom workspace directory.

OpenClaw compatibility uses this same backend selection; there is no separate OpenClaw-only memory engine.

Optional:

- `memory.team_reads_member_memory: true` allows team-context memory reads to include member agent scopes.

## Memory Scopes

| Scope | User ID Format               | Description                                |
| ----- | ---------------------------- | ------------------------------------------ |
| Agent | `agent_<name>`               | Agent preferences and durable user context |
| Room  | `room_<safe_room_id>`        | Shared room/project context                |
| Team  | `team_<agent1>+<agent2>+...` | Shared team conversation memory            |

Notes:

- Room IDs are sanitized (`:` -> `_`, `!` removed).
- Team IDs are sorted agent names joined by `+`.
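A minimal sketch of these ID rules (illustrative helpers, not MindRoom's internal code):

```python
def agent_scope(name: str) -> str:
    return f"agent_{name}"

def room_scope(room_id: str) -> str:
    # Sanitize: ":" becomes "_", "!" is removed
    safe = room_id.replace(":", "_").replace("!", "")
    return f"room_{safe}"

def team_scope(members: list) -> str:
    # Sorted agent names joined by "+"
    return "team_" + "+".join(sorted(members))
```

For example, `!abc123:example.com` becomes the scope `room_abc123_example.com`, and a team of `coder` and `assistant` always yields `team_assistant+coder` regardless of member order.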

## Backend: `mem0`

`mem0` is the default vector-memory backend:

- semantic retrieval before response
- automatic extraction after turns
- storage in Chroma-backed Mem0 collections

Example:

```
memory:
  backend: mem0
  embedder:
    provider: openai
    config:
      model: text-embedding-3-small
```

## Backend: `file`

`file` keeps memory in markdown files and treats those files as the source of truth.

Example:

```
memory:
  backend: file
  file:
    path: ./mindroom_data/memory_files
    max_entrypoint_lines: 200
```

Per-agent override example:

```
memory:
  backend: mem0

agents:
  coder:
    display_name: Coder
    role: Write and review code
    memory_backend: file
    memory_file_path: ./openclaw_data
```

`memory_file_path` is resolved relative to `config.yaml`. When set, the agent uses that directory as its memory scope instead of `<storage_path>/memory_files/agent_<name>/`.

### File layout

Under `memory.file.path` (or `<storage_path>/memory_files` by default), MindRoom stores per-scope folders such as:

- `agent_<name>/MEMORY.md`
- `agent_<name>/memory/YYYY-MM-DD.md`
- `room_<safe_room_id>/MEMORY.md`
- `room_<safe_room_id>/memory/YYYY-MM-DD.md`
- `team_<sorted_members>/MEMORY.md`
- `team_<sorted_members>/memory/YYYY-MM-DD.md`

## File Auto-Flush Worker

When the effective backend is `file` for at least one agent, you can enable background auto-flush:

```
memory:
  backend: file
  auto_flush:
    enabled: true
    flush_interval_seconds: 1800
    idle_seconds: 120
    max_dirty_age_seconds: 600
    stale_ttl_seconds: 86400
    max_cross_session_reprioritize: 5
    batch:
      max_sessions_per_cycle: 10
      max_sessions_per_agent_per_cycle: 3
    extractor:
      no_reply_token: NO_REPLY
      max_messages_per_flush: 20
      max_chars_per_flush: 12000
      max_extraction_seconds: 30
```

High-level behavior:

1. Turns mark sessions dirty.
1. The background worker picks eligible dirty sessions in bounded batches.
1. The worker runs a model-driven extraction (not keyword heuristics) to produce durable memories.
1. If the extractor returns `NO_REPLY`, nothing is written.
1. Successful writes append to memory files via the normal memory APIs.
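One flush cycle can be sketched roughly as below. All names and the session shape are hypothetical; the real worker also enforces per-agent batch limits, retries, stale-session TTLs, and extraction timeouts:

```python
import time

def flush_cycle(sessions, extract, write_memory, cfg):
    """One auto-flush pass over dirty sessions (illustrative, not the real worker)."""
    now = time.time()

    def eligible(session):
        idle = now - session["last_activity"] >= cfg["idle_seconds"]
        overdue = now - session["dirty_since"] >= cfg["max_dirty_age_seconds"]
        return session["dirty"] and (idle or overdue)

    # Bounded batch per cycle
    for s in [s for s in sessions if eligible(s)][: cfg["max_sessions_per_cycle"]]:
        # Model-driven extraction over a bounded slice of recent messages
        result = extract(s["messages"][-cfg["max_messages_per_flush"]:])
        if result != cfg["no_reply_token"]:   # NO_REPLY: nothing durable to keep
            write_memory(s["scope"], result)
        s["dirty"] = False
```

The key property is that sessions become eligible either by going idle (`idle_seconds`) or by staying dirty too long (`max_dirty_age_seconds`), and `NO_REPLY` turns produce no writes at all.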

## UI Configuration

The Dashboard **Memory** page supports:

- backend selection (`mem0` vs `file`)
- team/member read toggle (`team_reads_member_memory`)
- embedder provider/model/host
- file backend settings (`path`, `max_entrypoint_lines`)
- auto-flush settings (intervals, idle/age thresholds, retries)
- batch sizing
- extractor settings (`no_reply_token`, message/char/time limits, memory-context bounds)

Save from the Memory page to persist changes to `config.yaml`. Use the Dashboard **Agents** page to set an agent-specific **Memory Backend** override and optional **Memory File Path**.

## Optional Memory Tool

For explicit agent-controlled memory operations, add the `memory` tool:

```
agents:
  assistant:
    tools: [memory]
```

This exposes `add_memory`, `search_memory`, `get_all_memories`, and `delete_all_memories`.

# Voice Messages

MindRoom can surface Matrix voice messages as attachment-aware prompts for agents. If STT is configured, MindRoom also transcribes the audio and routes it through the normal text pipeline. If STT is unavailable, disabled, or fails, the audio remains available as an attachment and the prompt falls back to `🎤 [Attached voice message]`.

## Overview

When a voice message is received:

1. The audio event is handled through the shared media pipeline.
1. Audio is downloaded and decrypted, if needed, and registered as a context-scoped attachment.
1. If STT is configured and succeeds, the audio is transcribed and lightly normalized for mentions and commands.
1. If STT is unavailable, disabled, or fails, MindRoom falls back to `🎤 [Attached voice message]`.
1. The normalized text plus attachment metadata is dispatched using the normal routing and thread logic.
1. If routing is ambiguous in a multi-agent room, the router posts a visible handoff message.
1. If `voice.visible_router_echo` is enabled and the router is present and allowed to reply, the router also posts the normalized voice text as a display-only message.
1. Otherwise, no extra router message is posted and the chosen agent replies directly.
1. The responding agent receives the original audio attachment alongside the normalized prompt.

## Configuration

Enable STT and voice-intelligence formatting in `config.yaml`:

```
voice:
  enabled: true
  visible_router_echo: false
  stt:
    provider: openai
    model: whisper-1
    # Optional: custom endpoint (without /v1 suffix)
    # host: http://localhost:8080
  intelligence:
    model: default  # Model used for command recognition
```

Or use the dashboard's Voice tab.

With `voice.enabled: false`, audio messages are still surfaced as attachments with the fallback prompt. Enabling voice adds STT and command-recognition on top of that attachment flow. With `voice.visible_router_echo: true`, the router also posts the normalized transcript or fallback text for inspection when it is present in the room and allowed to reply.

## STT Providers

MindRoom uses the OpenAI-compatible transcription API. Any service that implements the `/v1/audio/transcriptions` endpoint will work.

### OpenAI Whisper (Cloud)

```
voice:
  enabled: true
  stt:
    provider: openai
    model: whisper-1
```

Requires `OPENAI_API_KEY` environment variable.

### Self-Hosted Whisper

```
voice:
  enabled: true
  stt:
    provider: openai
    model: whisper-1
    host: http://localhost:8080
```

Note: Do not include `/v1` in the host URL - MindRoom appends `/v1/audio/transcriptions` automatically.

Use with [faster-whisper-server](https://github.com/fedirz/faster-whisper-server) or similar OpenAI-compatible STT servers.

### Custom API Key

For self-hosted solutions that require authentication:

```
voice:
  enabled: true
  stt:
    provider: openai
    model: whisper-1
    host: http://localhost:8080
    api_key: your-custom-api-key
```

If `api_key` is not set, MindRoom falls back to the `OPENAI_API_KEY` environment variable.

## Command Recognition

The intelligence component uses an AI model to analyze transcriptions and format them properly:

1. **Agent mentions** - Converts spoken agent names to `@agent` format
1. **Command patterns** - Identifies and formats `!command` syntax
1. **Smart formatting** - Handles speech recognition errors and natural language variations

### Intelligence Model

The intelligence model processes raw transcriptions to recognize commands and agent names:

```
voice:
  intelligence:
    model: default  # Uses the default model from your models config
```

You can specify a different model for faster or more accurate command recognition.

## How It Works

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ Voice Msg   │────▶│ Download &  │────▶│ Transcribe  │────▶│ Format with │
│ (Audio)     │     │ Decrypt     │     │ (STT)       │     │ AI (LLM)    │
└─────────────┘     └─────────────┘     └─────────────┘     └─────────────┘
                                                                  │
                                                                  ▼
                                                         ┌──────────────────┐
                                                         │ Normal Dispatch  │
                                                         │ Decision         │
                                                         └──────────────────┘
                                                           │            │
                                                           │            │
                                                           ▼            ▼
                                                 ┌──────────────┐  ┌──────────────┐
                                                 │ Visible      │  │ No Visible   │
                                                 │ Router       │  │ Router       │
                                                 │ Handoff      │  │ Handoff      │
                                                 └──────────────┘  └──────────────┘
                                                           │            │
                                                           └──────┬─────┘
                                                                  ▼
                                                           ┌─────────────┐
                                                           │ Agent       │
                                                           │ Responds    │
                                                           └─────────────┘
```

## Dispatch Behavior

### Single-agent rooms or explicitly targeted audio

If only one eligible agent is visible, that agent responds directly to the normalized audio event. If the audio caption or transcript explicitly mentions an agent, that targeted agent responds directly as well. In these cases, the router does not post an extra visible routing handoff. The transcript or fallback text is used internally for dispatch, not echoed to the room as a separate message. If `voice.visible_router_echo` is enabled, the router still posts a display-only copy of the normalized voice text, but agents ignore that echo and continue responding to the original audio event.

### Multi-agent rooms where the router must choose

If multiple agents are available and the audio does not already target one of them, the router uses the normalized text to do the usual routing step. The router then posts a normal handoff message such as `@home could you help with this?`. The selected agent responds to that router handoff, and the handoff carries the original audio attachment metadata forward. This is the case where a visible router message appears. If `voice.visible_router_echo` is also enabled, the router first posts the normalized voice text as a display-only echo and then posts the normal handoff.

### No router, or router cannot reply

Audio still works when the router is absent. In that case, agents handle the normalized audio directly using the same mention, thread, and permission rules as normal text messages. The same direct handling also applies when the router is present but is not allowed to reply to the original sender. In these cases, there is no visible router echo because the router does not handle the event. If multiple eligible agents remain and the audio does not already target one of them, there is no automatic handoff until the user mentions an agent.

### Visibility rule

MindRoom does not automatically post the transcript to the room. A visible router message appears only when the router must disambiguate between multiple eligible responders. If the responder is already clear from room shape, thread context, or explicit targeting, the chosen agent replies directly without an extra router message. Setting `voice.visible_router_echo: true` adds a visible router-authored echo of the normalized voice text when the router is actually allowed to process the event, without changing which event agents actually answer.

### Attachment access

The original audio is always registered as a context-scoped attachment before dispatch continues. That means the responding agent can inspect the file directly, use audio-capable models, or fetch it later with the `attachments` tool. This is true whether the prompt came from a transcript, a fallback message, or a router handoff.

## Matrix Integration

Voice messages in Matrix are:

- Detected as `RoomMessageAudio` or `RoomEncryptedAudio` events
- Downloaded from the Matrix media server
- Decrypted if end-to-end encrypted (using the encryption key from the event)
- Registered as audio attachments before dispatch
- Sent to the STT service via the OpenAI-compatible API when transcription is enabled
- Normalized once per room and thread context, even though multiple bots may observe the event

Audio callbacks are registered on all bots because audio now follows the shared media pipeline. Shared normalization prevents repeated download and STT work for the same event. Reply-permission checks still use the original human sender, not a later router relay.

## Environment Variables

| Variable         | Description                                                          |
| ---------------- | -------------------------------------------------------------------- |
| `OPENAI_API_KEY` | For OpenAI Whisper API (used as fallback if no `api_key` configured) |

## Text-to-Speech Tools

MindRoom also supports text-to-speech (TTS) through agent tools. These are separate from voice message transcription and allow agents to generate audio responses:

- **OpenAI** - Speech synthesis via `openai` tool
- **ElevenLabs** - High-quality AI voices and sound effects via `eleven_labs` tool
- **Cartesia** - Voice AI with optional voice localization via `cartesia` tool
- **Groq** - Fast speech generation via `groq` tool

See the [Tools documentation](https://docs.mindroom.chat/tools/index.md) for configuration details.

## Voice Fallback (No STT Available)

When STT is unavailable, disabled, or transcription fails, MindRoom falls back to raw audio passthrough:

1. The voice message audio is downloaded and saved locally as an attachment
1. The normalized text becomes `🎤 [Attached voice message]`
1. The raw audio is registered as an attachment ID available to agents in the room or thread context
1. When an agent responds, it automatically receives the raw audio as an Agno `Audio` object

This means voice messages still reach agents even without STT. Agents with audio-capable models can process the raw audio directly, and tool-using agents can retrieve the file by attachment ID. Attachment IDs in this fallback path use the same context-scoping rules described in [File & Video Attachments](https://docs.mindroom.chat/attachments/index.md).

## Limitations

- Only OpenAI-compatible STT APIs are supported
- Audio quality and background noise affect transcription accuracy
- Without STT, routing has less textual context, so explicit `@mentions` or existing thread context are more reliable in multi-agent rooms
- Without STT, agents receive raw audio instead of transcription, so the model or tools must support audio inputs to process it

## Tips

- **Say the agent name first** - "Hey @assistant, what's the weather?"
- **Use display names** - The AI converts spoken names like "HomeAssistant" to the correct `@home` mention

# Image Messages

MindRoom can process images sent to Matrix rooms, passing them to vision-capable AI models for analysis.

## Overview

When a user sends an image in a Matrix room:

1. The agent determines whether it should respond (via mention, thread participation, or DM)
1. The image is downloaded and decrypted (if E2E encrypted)
1. The image is wrapped as an `agno.media.Image` and passed to the AI model
1. The agent responds with its analysis

Image support works automatically for all agents -- no configuration is needed. The AI model must support vision (e.g., Claude, GPT-4o).

## How It Works

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ Image Msg   │────>│ Download &  │────>│ Pass to AI  │
│ (Matrix)    │     │ Decrypt     │     │ Model       │
└─────────────┘     └─────────────┘     └─────────────┘
                                              │
                                              v
                                        ┌─────────────┐
                                        │ Agent       │
                                        │ Responds    │
                                        └─────────────┘
```

## Usage

Send an image in a Matrix room and mention the agent in the caption:

- **With caption**: `@assistant What does this diagram show?` -- the caption is used as the prompt
- **Without caption**: The agent receives `[Attached image]` as the prompt and describes what it sees
- **Bare filename**: If the body is just a filename (e.g., `IMG_1234.jpg`), it is treated the same as no caption

Images work in both direct messages and threads, and with both individual agents and teams.
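The caption rules above can be sketched as a small normalization step (illustrative only, not the actual code):

```python
def effective_prompt(body, filename, fallback="[Attached image]"):
    """A caption that is empty or just the filename counts as no caption."""
    if not body or body.strip() == filename:
        return fallback
    return body
```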

## Encryption

Both unencrypted and E2E encrypted images are supported. Encrypted images are decrypted transparently using the key material from the Matrix event.

## Caching

AI response caching is automatically skipped when images are present, since image payloads are large and unlikely to repeat.

## Limitations

- **Routing in multi-agent rooms** -- in multi-agent rooms without an `@mention`, the router selects the best agent based on the image caption.
- **Bridge mention detection** uses `m.mentions` in the event, falling back to parsing HTML pills from `formatted_body` when `m.mentions` is absent (e.g., mautrix-telegram). Bridges that set neither may not trigger agent responses.
- **Model support** -- the configured model must support vision. Text-only models will ignore the image or return an error.

# File & Video Attachments

MindRoom can process files and videos sent to Matrix rooms, passing them to agents for analysis or action.

## Overview

When a user sends a file or video in a Matrix room:

1. The agent determines whether it should respond (via mention, thread participation, or DM)
1. The media is downloaded and decrypted (if E2E encrypted)
1. The file is saved locally and registered as a context-scoped attachment
1. The agent receives the media as an Agno `File` or `Video` object plus an attachment ID it can reference in tool calls
1. The agent responds with its analysis or takes action on the file

File and video support works automatically for all agents -- no configuration is needed.

## How It Works

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ File/Video  │────>│ Download &  │────>│ Register    │────>│ Pass to AI  │
│ (Matrix)    │     │ Decrypt     │     │ Attachment  │     │ Model       │
└─────────────┘     └─────────────┘     └─────────────┘     └─────────────┘
                                                                  │
                                                                  v
                                                            ┌─────────────┐
                                                            │ Agent       │
                                                            │ Responds    │
                                                            └─────────────┘
```

## Usage

Send a file or video in a Matrix room and mention the agent in the caption:

- **With caption**: `@assistant Summarize this document` -- the caption is used as the prompt
- **Without caption**: The agent receives `[Attached file]` or `[Attached video]` as the prompt
- **Bare filename**: If the body is just the filename (e.g., `report.pdf`), it is treated the same as no caption

Files and videos work in both direct messages and threads, and with both individual agents and teams.

## Attachment IDs

Each uploaded file or video is assigned a stable attachment ID (e.g., `att_abc123`). The agent's prompt is augmented with the available IDs:

```
Available attachment IDs: att_abc123. Use tool calls to inspect or process them.
```

Attachment IDs are **context-scoped** -- an attachment registered in one room or thread is not accessible from another. This prevents cross-room data leakage for ID-based access. Voice raw-audio fallback uses the same attachment ID mechanism; see [Voice Fallback](https://docs.mindroom.chat/voice/#voice-fallback-no-stt-available).
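Context scoping can be pictured as a registry keyed by both the context and the ID. This is an illustrative sketch, not MindRoom's implementation:

```python
import itertools

class AttachmentRegistry:
    """Sketch of context-scoped attachment IDs (illustrative, not the real code)."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._store = {}  # (context_key, attachment_id) -> local file path

    def register(self, context_key, file_path):
        att_id = f"att_{next(self._ids):06x}"
        self._store[(context_key, att_id)] = file_path
        return att_id

    def get(self, context_key, att_id):
        # An ID resolves only inside the room/thread context that registered it
        return self._store.get((context_key, att_id))
```

Looking up a valid ID from a different room or thread simply returns nothing, which is what prevents cross-room data leakage for ID-based access.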

## The `attachments` Tool

Agents can use the optional `attachments` tool to interact with context-scoped attachments programmatically.

### Enabling

Add `attachments` to the agent's tool list:

```
agents:
  assistant:
    tools:
      - attachments
```

### Operations

| Operation                        | Description                                                                                      |
| -------------------------------- | ------------------------------------------------------------------------------------------------ |
| `list_attachments(target?)`      | List metadata for attachments in the current context (ID, local_path, filename, MIME type, size) |
| `get_attachment(attachment_id)`  | Return one context attachment record, including its local file path                              |
| `register_attachment(file_path)` | Register a local file path as a context attachment ID (`att_*`)                                  |

Use `matrix_message(action="send"|"reply"|"thread-reply", attachment_ids=..., attachment_file_paths=...)` to send attachments. `attachment_ids` accepts only context attachment IDs (`att_*`); `attachment_file_paths` accepts local file paths and auto-registers them in the current context before sending.

### Why use this tool?

Not all AI models support direct file inputs. The `attachments` tool lets any model work with files by calling tools that operate on attachment IDs, even if the model itself cannot ingest the raw bytes.

## Encryption

Both unencrypted and E2E encrypted files and videos are supported. Encrypted media is decrypted transparently using the key material from the Matrix event.

## Caching

AI response caching is automatically skipped when files, videos, or audio are present, since media payloads are large and unlikely to repeat.

## Retention

MindRoom automatically prunes attachment metadata and managed `incoming_media/` files older than 30 days. Pruning runs opportunistically during new attachment registration.

## Limitations

- **Routing in multi-agent rooms** -- in multi-agent rooms without an `@mention`, the router selects the best agent based on the file caption.
- **Model support** -- the configured model must support file or video inputs for direct analysis. Models that do not can still use the `attachments` tool to inspect and process files via tool calls.

# Scheduling

Schedule agents to perform tasks at specific times or intervals using natural language. Tasks run in the thread where they were created.

## Commands

### Schedule a Task

```
!schedule <natural-language-request>
```

**One-Time Tasks:**

```
!schedule in 5 minutes Check the deployment
!schedule tomorrow at 3pm Send the weekly report
```

**Recurring Tasks:**

```
!schedule Every hour, @shell check server status
!schedule Daily at 9am, @finance market report
!schedule Weekly on Friday, @analyst prepare weekly summary
```

**Event-Driven Workflows:**

Conditional requests are converted to polling schedules:

```
!schedule If I get an email about "urgent", @phone_agent call me
!schedule When Bitcoin drops below $40k, @crypto_agent notify me
```

### Edit a Schedule

```
!edit_schedule <task-id> <new-task-description>
```

Edits an existing scheduled task by ID. The task description is re-parsed to update timing and content.

### List and Cancel Schedules

```
!list_schedules                  # Show pending tasks
!cancel_schedule <task-id>       # Cancel specific task
!cancel_schedule all             # Cancel all tasks in room
```

Aliases: `!listschedules`, `!list-schedules`, `!cancelschedule`, `!cancel-schedule`, `!editschedule`, `!edit-schedule`

## Agent Mentions

Include `@agent_name` in your schedule to have specific agents respond. The scheduler validates that mentioned agents are available in the room before creating the task.

## Timezone

Schedules use the timezone from `config.yaml` (defaults to UTC):

```
timezone: America/Los_Angeles
```

## Persistence

Schedules are stored in Matrix room state and persist across restarts. Past one-time tasks are automatically skipped during restoration.

# Authorization

MindRoom controls which Matrix users can interact with agents.

Room access (joinability/discoverability) is configured separately through `matrix_room_access`.

## Configuration

Configure authorization in `config.yaml`:

```
authorization:
  # Users with access to all rooms
  global_users:
    - "@admin:example.com"
    - "@developer:example.com"

  # Room-specific permissions (room ID, full alias, or managed room key)
  room_permissions:
    "!abc123:example.com":
      - "@user1:example.com"
      - "@user2:example.com"
    "#lobby:example.com":
      - "@user3:example.com"
    "ops":
      - "@user4:example.com"

  # Default for rooms not in room_permissions
  default_room_access: false

  # Optional: per-agent/team/router reply allowlists
  # Keys must match an agent name, team name, "router", or "*"
  # Values are canonical Matrix user IDs or glob patterns (aliases are resolved)
  # Examples: "*:example.com", "@admin:*", "*"
  agent_reply_permissions:
    "*":
      - "@admin:example.com"
    code:
      - "@admin:example.com"
    research:
      - "@developer:example.com"
    router:
      - "*"

# Optional: configure the internal MindRoom user identity
mindroom_user:
  username: mindroom_user          # Set before first startup (cannot be changed later)
  display_name: MindRoomUser

# Optional: room onboarding/discoverability policy
matrix_room_access:
  mode: single_user_private        # default
  multi_user_join_rule: public     # public or knock (multi_user only)
  publish_to_room_directory: false # publish managed rooms to public directory
  invite_only_rooms: []            # room keys/aliases/IDs that stay restricted
  reconcile_existing_rooms: false  # migrate existing managed rooms when true
```

**Defaults** (when `authorization` block is omitted):

- `global_users: []`
- `room_permissions: {}`
- `default_room_access: false`
- `agent_reply_permissions: {}`

This means only MindRoom system users (agents, teams, router, and the configured internal user, default `@mindroom_user`) can interact with agents by default.

`mindroom_user.username` is a one-time setting used to create the internal Matrix account. After the account exists, keep the same username and only change `mindroom_user.display_name` for visible name changes.

For `authorization.room_permissions`, MindRoom accepts these key formats:

- Room ID: `!roomid:example.com`
- Full room alias: `#alias:example.com`
- Managed room key: `alias` (the configured room name/key used by MindRoom)

## Matrix Room Onboarding for OIDC Users

When users authenticate through Synapse OIDC, they are regular Matrix users. To let them join managed MindRoom rooms by alias without manual invites:

1. Set `matrix_room_access.mode: multi_user`.
1. Set `multi_user_join_rule` to `public` (direct join) or `knock` (request access).
1. Set `publish_to_room_directory: true` if rooms should appear in Explore/public room directory.

If you keep `mode: single_user_private` (default), managed rooms remain invite-only and private in the directory.

### Required Service Account Permissions

MindRoom applies room join rules and directory visibility using its managing account (typically the router account, e.g. `@mindroom_router:<domain>`).

- The managing account must be joined to the room.
- The managing account must have enough power to send `m.room.join_rules`.
- To publish to the room directory, Synapse requires moderator/admin-level power in that room.

If permissions are insufficient, MindRoom logs actionable warnings including the Matrix API error and required permission hint.

## Migration Guide (Existing Deployments)

Use this opt-in migration flow to move existing managed rooms to multi-user onboarding safely:

1. Update config:
1. `matrix_room_access.mode: multi_user`
1. choose `multi_user_join_rule`
1. set `publish_to_room_directory` as needed
1. optionally list restricted rooms in `invite_only_rooms`
1. Enable reconciliation once:
1. `matrix_room_access.reconcile_existing_rooms: true`
1. Restart MindRoom and verify logs for each managed room.
1. After migration is complete, set `reconcile_existing_rooms: false` again (recommended steady state).

Only managed rooms (rooms configured through MindRoom agents/teams) are reconciled.

## Matrix ID Format

User IDs follow the Matrix format: `@localpart:homeserver.domain`

Examples: `@alice:matrix.org`, `@bob:example.com`, `@admin:company.internal`

## Authorization Flow

Authorization checks are performed in order:

1. **Internal system user** - `@{mindroom_user.username}:{domain}` is always authorized (default: `@mindroom_user:{domain}`). The same localpart on a different domain is NOT authorized.
1. **MindRoom agents/teams/router** - Configured agents, teams, and the router are authorized
1. **Alias resolution** - If the sender matches a bridge alias in `aliases`, it is resolved to the canonical user ID for the remaining checks
1. **Global users** - Users in `global_users` have access to all rooms
1. **Room permissions** - If any matching room identifier exists in `room_permissions` (room ID, full alias, or managed room key), the user must be in that list (the check does NOT fall through to `default_room_access`)
1. **Default access** - Rooms not in `room_permissions` use `default_room_access`

> [!TIP] Set `default_room_access: false` and explicitly grant access via `global_users` or `room_permissions` for better security.
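The check order above can be sketched as a plain function. This is an illustrative re-implementation under assumptions, not MindRoom's actual code: the `is_authorized` name and the flat `config` dict shape are invented for the example.

```python
def is_authorized(sender: str, room: str, config: dict) -> bool:
    """Illustrative sketch of the documented authorization check order."""
    # 1. Internal system user (exact match, domain included)
    if sender == config["internal_user"]:
        return True
    # 2. Configured agents, teams, and the router
    if sender in config["system_identities"]:
        return True
    # 3. Alias resolution: bridge IDs resolve to their canonical user
    for canonical, aliases in config.get("aliases", {}).items():
        if sender in aliases:
            sender = canonical
            break
    # 4. Global users have access to all rooms
    if sender in config.get("global_users", []):
        return True
    # 5. Explicit room list: does NOT fall through to the default
    perms = config.get("room_permissions", {})
    if room in perms:
        return sender in perms[room]
    # 6. Rooms not listed use the default
    return config.get("default_room_access", True)
```

Note how step 5 short-circuits: once a room appears in `room_permissions`, membership in that list is the only way in (besides `global_users`, which was checked earlier).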

## Bridge Aliases

When using Matrix bridges (e.g., mautrix-telegram, mautrix-signal), messages from the bridged platform arrive with a different Matrix user ID. Use `aliases` to map these bridge-created IDs to a canonical user so they inherit the same permissions:

```
authorization:
  global_users:
    - "@alice:example.com"
  room_permissions:
    "!room1:example.com":
      - "@bob:example.com"
  aliases:
    "@alice:example.com":
      - "@telegram_123:example.com"
      - "@signal_456:example.com"
    "@bob:example.com":
      - "@telegram_789:example.com"
```

In this example, messages from `@telegram_123:example.com` are treated as `@alice:example.com` (global access), and messages from `@telegram_789:example.com` are treated as `@bob:example.com` (access to `!room1:example.com` only).

## Per-Agent Reply Permissions

Use `authorization.agent_reply_permissions` to restrict which users each agent can reply to.

- The map key is an entity name: agent name, team name, `router`, or `*`.
- The `*` key is a default rule for entities that do not have an explicit entry.
- The value is a list of allowed Matrix user IDs.
- Values support glob-style matching (for example `*:example.com`).
- A `*` user entry means "allow any sender" for that specific entity.
- If an entity is not present in the map, it has no extra reply restriction.
- Alias mapping from `authorization.aliases` is applied before matching, so bridged IDs inherit canonical user permissions.
- Internal MindRoom identities (agents, teams, router, and the internal `mindroom_user`) always bypass reply permissions — they are system participants, not end users.
- `bot_accounts` are **not** exempt. Bridge bots listed in `bot_accounts` are still subject to reply permission checks.
- Keys that do not match any configured agent, team, `router`, or `*` are rejected at config load time.
- For voice messages, the permission check uses the original human sender, not the router that posted the transcription.

```
authorization:
  global_users:
    - "@alice:example.com"
    - "@bob:example.com"
  aliases:
    "@alice:example.com":
      - "@telegram_111:example.com"
  agent_reply_permissions:
    "*":
      - "@alice:example.com"
    code:
      - "@alice:example.com"
    research:
      - "@bob:example.com"
    router:
      - "*"
```

In this example, `*` restricts all entities to Alice by default, `research` overrides that and replies only to Bob, and `router` can reply to anyone.
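The lookup and glob semantics can be sketched with the standard library's `fnmatch`. A hedged sketch: the `may_reply` name is hypothetical, and it assumes alias resolution has already been applied to `sender`.

```python
from fnmatch import fnmatch


def may_reply(entity: str, sender: str, rules: dict[str, list[str]]) -> bool:
    """Illustrative sketch of agent_reply_permissions matching."""
    # Explicit entry wins; otherwise fall back to the "*" default rule
    allowed = rules.get(entity, rules.get("*"))
    if allowed is None:  # no entry and no default: no extra restriction
        return True
    # Entries support glob-style matching; a bare "*" allows any sender
    return any(fnmatch(sender, pattern) for pattern in allowed)
```

With the example config above, `may_reply("research", "@alice:example.com", rules)` is false while `may_reply("router", "@anyone:anywhere.org", rules)` is true.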

## Bot Accounts

The `bot_accounts` field is a **top-level** config option (not under `authorization:`). It lists Matrix user IDs of non-MindRoom bots — such as bridge bots for Telegram, Slack, or other platforms — that should be treated like agents for response logic. Bots in this list won't trigger the multi-human-thread mention requirement.

```
# Top-level config, not under authorization:
bot_accounts:
  - "@telegram_bot:example.com"
  - "@slack_bot:example.com"
```

For more details on how `bot_accounts` affects routing behavior, see the [Router configuration](https://docs.mindroom.chat/configuration/router/index.md) page.

# OpenAI-Compatible API

MindRoom exposes an OpenAI-compatible chat completions API so any chat frontend can use MindRoom agents as selectable "models". LibreChat, Open WebUI, LobeChat, ChatBox, BoltAI, and anything else that speaks the OpenAI protocol works out of the box.

## How It Works

The frontend calls `GET /v1/models` and sees your agents in the model picker. The user picks an agent and chats. The frontend sends standard OpenAI requests; MindRoom routes them to the selected agent with all its tools, instructions, and memory. The frontend doesn't know it's talking to an agent — it's transparent.

```
Chat Frontend (LibreChat, Open WebUI, etc.)
│
│  GET  /v1/models           → returns your agents as "models"
│  POST /v1/chat/completions → routes to the selected agent
│
└──→ MindRoom API ──→ ai_response() / stream_agent_response()
                         │
                         └──→ agents, tools, memory, knowledge bases
```

No Matrix auth dependency. You can run the OpenAI-compatible API standalone or alongside the Matrix bot.

## Setup

### 1. Set API keys

Add to your `.env`:

```
# Option A: Set API keys (recommended for production)
OPENAI_COMPAT_API_KEYS=sk-my-secret-key-1,sk-my-secret-key-2

# Option B: Allow unauthenticated access (local dev only)
OPENAI_COMPAT_ALLOW_UNAUTHENTICATED=true
```

Without either of these, the API returns 401 on all requests.

### 2. Start MindRoom

```
# Full MindRoom runtime (Matrix bot + API server + dashboard)
uv run mindroom run

# Or via just
just start-mindroom-dev
```

The API is available at `http://localhost:8765/v1/`.

> [!IMPORTANT] If the dashboard and `/v1/*` share a domain behind a reverse proxy, route `/v1/*` to the MindRoom runtime (in addition to `/api/*`). Otherwise OpenAI-compatible requests can be handled by the dashboard and fail.

### 3. Verify

```
# List available agents
curl -H "Authorization: Bearer sk-my-secret-key-1" \
  http://localhost:8765/v1/models

# Chat (non-streaming)
curl -H "Authorization: Bearer sk-my-secret-key-1" \
  -H "Content-Type: application/json" \
  -d '{"model":"general","messages":[{"role":"user","content":"Hello"}]}' \
  http://localhost:8765/v1/chat/completions

# Chat (streaming)
curl -N -H "Authorization: Bearer sk-my-secret-key-1" \
  -H "Content-Type: application/json" \
  -d '{"model":"general","messages":[{"role":"user","content":"Hello"}],"stream":true}' \
  http://localhost:8765/v1/chat/completions
```

## Client Configuration

### LibreChat

Add to your `librechat.yaml`:

```
endpoints:
  custom:
    - name: "MindRoom"
      apiKey: "${MINDROOM_API_KEY}"
      baseURL: "http://localhost:8765/v1"
      models:
        default: ["general"]
        fetch: true
      modelDisplayLabel: "MindRoom"
      titleConvo: true
      titleModel: "general"
      dropParams: ["stop", "frequency_penalty", "presence_penalty", "top_p"]
      headers:
        # Highest-priority session key used by MindRoom
        X-Session-Id: "{{LIBRECHAT_BODY_CONVERSATIONID}}"
        # Backward-compatible fallback used by MindRoom
        X-LibreChat-Conversation-Id: "{{LIBRECHAT_BODY_CONVERSATIONID}}"
```

`X-Session-Id` is recommended when you want deterministic MindRoom session continuity. This is especially important for tools that keep long-lived sessions inside the MindRoom runtime. `X-LibreChat-Conversation-Id` alone is still enough to keep continuity if you already use it.

### Open WebUI

1. Go to **Admin Settings > Connections > OpenAI > Manage**
1. Set API URL to `http://localhost:8765/v1`
1. Set API Key to one of your `OPENAI_COMPAT_API_KEYS`
1. Agents appear automatically in the model picker

### Any OpenAI-compatible client

Point the base URL at `http://localhost:8765/v1` and set the API key. MindRoom implements the OpenAI-compatible `GET /v1/models` and `POST /v1/chat/completions` endpoints.

## Features

### Model selection

Each agent in `config.yaml` appears as a selectable model. The model ID is the agent's internal name (e.g., `code`, `research`), and the display name comes from `display_name`.

### Auto-routing

Select the `auto` model to let MindRoom's router pick the best agent for each message, the same routing logic used in Matrix rooms.

### Teams

Teams are exposed as `team/<team_name>` models. Selecting `team/super_team` runs the full team collaboration or coordination workflow.

### Streaming

`stream: true` returns Server-Sent Events in the standard OpenAI format: role chunk, content chunks, finish chunk, `[DONE]`.

Tool calls appear inline as text in the stream (not as native OpenAI `tool_calls` deltas). MindRoom currently emits tool events in stream chunks as inline `<tool id="N" state="start|done">...</tool>` content.

### Session continuity

Session IDs are derived from request headers:

1. `X-Session-Id` header (explicit control)
1. `X-LibreChat-Conversation-Id` header (automatic with LibreChat)
1. Random UUID fallback

Agent memory and conversation history persist across requests with the same session ID. For persistent MindRoom tool sessions (for example a long-running coding session), prefer `X-Session-Id`.
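The header precedence above amounts to a three-way fallback. A minimal sketch; the `derive_session_id` name is illustrative, not MindRoom's actual function.

```python
import uuid


def derive_session_id(headers: dict[str, str]) -> str:
    """Illustrative sketch of the documented session-ID precedence."""
    return (
        headers.get("X-Session-Id")                    # 1. explicit control
        or headers.get("X-LibreChat-Conversation-Id")  # 2. LibreChat automatic
        or str(uuid.uuid4())                           # 3. random fallback
    )
```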

### Claude Agent tool sessions

If an agent enables the `claude_agent` tool, the same `X-Session-Id` keeps the Claude session alive across turns. This lets a user continue one long coding flow instead of starting a fresh Claude process on every request. See [Claude Agent Sessions](https://docs.mindroom.chat/tools/builtin/#claude-agent-sessions) for configuration details.

Parallel Claude sub-sessions are supported by using different `session_label` values in tool calls:

- Same `session_label`: one shared Claude session (serialized by a per-session lock)
- Different `session_label`: independent Claude sessions that can run concurrently
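The per-label serialization can be sketched with one `asyncio.Lock` per `session_label`. This is a hypothetical illustration of the locking behavior described above, not MindRoom's implementation.

```python
import asyncio
from collections import defaultdict

# One lock per session label: same label serializes turns,
# different labels run concurrently.
_session_locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)


async def run_in_session(session_label: str, work):
    """Run `work` while holding the lock for this session label."""
    async with _session_locks[session_label]:
        return await work()
```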

### Knowledge bases

Agents with configured `knowledge_bases` in `config.yaml` get RAG support automatically. No additional API configuration needed. For Git-backed knowledge bases, API-only deployments auto-clone/sync/index on manager initialization.

## What's ignored

The API accepts but ignores these OpenAI parameters (the agent's own config controls them):

- `temperature`, `top_p`, `max_tokens`, `max_completion_tokens`
- `tools`, `tool_choice` (agents use their configured tools)
- `stop`, `frequency_penalty`, `presence_penalty`, `seed`
- `response_format`, `logprobs`, `logit_bias`
- `stream_options` (usage stats are always zeros)

Client `system` / `developer` messages are prepended to the prompt. They augment the agent's built-in instructions rather than replacing them.

## Authentication

| `OPENAI_COMPAT_API_KEYS` | `OPENAI_COMPAT_ALLOW_UNAUTHENTICATED` | Behavior                                                          |
| ------------------------ | ------------------------------------- | ----------------------------------------------------------------- |
| Set                      | (any)                                 | Bearer token required, must match one of the comma-separated keys |
| Unset                    | `true`                                | No authentication required                                        |
| Unset                    | Unset/`false`                         | All requests return 401 (locked)                                  |

The OpenAI-compatible API uses its own auth (`OPENAI_COMPAT_API_KEYS`), separate from the dashboard API auth. In standalone mode, the dashboard `/api/*` endpoints can be protected with `MINDROOM_API_KEY`; the browser dashboard uses a same-origin auth cookie, while CLI and curl clients can still send `Authorization: Bearer ...`. These are independent: `MINDROOM_API_KEY` secures the dashboard, while `OPENAI_COMPAT_API_KEYS` secures the `/v1/*` chat completions endpoints.

## Limitations

- **Token usage is always zeros** — Agno doesn't expose token counts
- **No native `tool_calls` format** — tool results appear inline in content text
- **`show_tool_calls` config is Matrix-only today** — OpenAI-compatible `/v1/chat/completions` currently includes tool-call text/events regardless of `show_tool_calls: false`
- **No room memory** — only agent-scoped memory (no `room_id` in API requests)
- **Scheduler tool unavailable** — scheduling requires Matrix context; when invoked through the API, the tool returns an error message

# Architecture

MindRoom's architecture consists of several key components working together.

## Overview

```
┌─────────────────────────────────────────────────────────┐
│                   Matrix Homeserver                      │
│              (Synapse, Conduit, etc.)                    │
└──────────────────────┬──────────────────────────────────┘
                       │
┌──────────────────────▼──────────────────────────────────┐
│              MultiAgentOrchestrator                      │
│  ┌─────────────────────────────────────────────────┐    │
│  │                   Matrix Client                  │    │
│  │         (nio, sync loops, presence)             │    │
│  └─────────────────────────────────────────────────┘    │
│                                                          │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐    │
│  │ Router  │  │ Agent 1 │  │ Agent 2 │  │  Team   │    │
│  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘    │
│       │            │            │            │          │
│  ┌────▼────────────▼────────────▼────────────▼────┐    │
│  │              Agno Runtime                       │    │
│  │         (LLM calls, tool execution)            │    │
│  └─────────────────────────────────────────────────┘    │
│                                                          │
│  ┌─────────────────────────────────────────────────┐    │
│  │                Memory System                     │    │
│  │  (Mem0 + ChromaDB, agent/room/team scopes)      │    │
│  └─────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────┘
```

## Components

- [Matrix Integration](https://docs.mindroom.chat/architecture/matrix/index.md) - How MindRoom connects to Matrix
- [Agent Orchestration](https://docs.mindroom.chat/architecture/orchestration/index.md) - How agents are managed

## Data Flow

1. **Message arrives** from Matrix homeserver
1. **Router decides** which agent should handle it (if no explicit mention)
1. **Agent processes** the message using the Agno runtime
1. **Tools execute** as needed (file operations, API calls, etc.)
1. **Response sent** back to Matrix room
1. **Memory updates** asynchronously in background

# Matrix Integration

MindRoom uses the Matrix protocol for all agent communication. The integration is implemented in `src/mindroom/matrix/`.

## Why Matrix?

- **Federated** - Connect to any Matrix homeserver
- **Bridgeable** - Bridge to Discord, Slack, Telegram, and more
- **Open** - Open standard and open-source implementations
- **End-to-End Encryption** - Secure communication with encrypted room support

## Matrix Client

MindRoom uses `matrix-nio` for Matrix communication with SSL context handling and encryption key storage.

### Environment Variables

| Variable             | Default                 | Description                              |
| -------------------- | ----------------------- | ---------------------------------------- |
| `MATRIX_HOMESERVER`  | `http://localhost:8008` | Matrix homeserver URL                    |
| `MATRIX_SERVER_NAME` | (from homeserver)       | Federation server name                   |
| `MATRIX_SSL_VERIFY`  | `true`                  | Set to `false` for dev/self-signed certs |

Streaming behavior is configured in `config.yaml` with `defaults.enable_streaming` (default: `true`).

## Agent Users

Each agent gets its own Matrix user with the `mindroom_` prefix:

```
@mindroom_assistant:example.com
@mindroom_router:example.com  (built-in routing agent)
```

Users are automatically created during orchestrator startup and credentials are persisted in `mindroom_data/matrix_state.yaml`.

## Room Management

Agents can join existing rooms, create new rooms with AI-generated topics, respond to invites automatically, leave unconfigured rooms, and set room avatars.

Rooms are auto-created via `ensure_room_exists()` and `ensure_all_rooms_exist()`. DM rooms can be detected with `is_dm_room(client, room_id)`.

## Threading (MSC3440)

MindRoom emits thread replies following [MSC3440](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3440-threading-via-relations.md), using `m.relates_to` with `rel_type: m.thread`.

For clients that send plain replies without thread metadata (`m.in_reply_to` but no `rel_type: m.thread`), MindRoom resolves the reply chain to the existing thread root and continues the same conversation.

### Resolution Rules

When deriving context for a non-thread client reply, MindRoom:

1. Traverses `m.in_reply_to` backwards until it finds a root, a known thread root, a cycle, or the traversal limit.
1. Uses cycle detection and a bounded traversal limit (`ReplyChainCaches.traversal_limit`) to avoid runaway chains.
1. If the chain points to a real thread root, fetches thread history and merges chain history so plain replies are preserved in order.
1. If no thread relation exists, treats the reply chain itself as the conversation context root.
1. Falls back to the oldest successfully resolved event when traversal is interrupted by fetch failures or limits.
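The traversal rules above can be sketched as a bounded walk with cycle detection. This is an assumption-laden illustration: the function name, the `parents` map (event ID to its `m.in_reply_to` target), and the default limit are all invented, not MindRoom's API.

```python
def resolve_context_root(event_id, parents, thread_roots, limit=50):
    """Illustrative bounded reply-chain walk with cycle detection."""
    seen = set()
    current = event_id
    for _ in range(limit):
        if current in thread_roots:
            return current       # chain points to a real thread root
        parent = parents.get(current)
        if parent is None or parent in seen:
            return current       # chain root reached, or a cycle detected
        seen.add(current)
        current = parent
    return current               # limit hit: oldest successfully resolved event
```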

```
├── User: @assistant help with this code
│   ├── Assistant: I can help! Let me look at it...
│   ├── User: It should return a list
│   └── Assistant: Here's the updated version...
```

Use `build_message_content()` from `message_builder.py` to construct thread-aware messages, and `EventInfo.from_event()` to analyze event relations (threads, edits, replies, reactions).

## Message Flow

### Sync Loop

Each agent bot runs its own sync loop with 30-second long-polling timeout. Sync loops are wrapped with `_sync_forever_with_restart()` for automatic restart on connection failures.

Events are processed in background tasks:

1. Sync receives event via long-polling
1. Event callback triggered (`_on_message`, `_on_invite`, etc.)
1. Background task created for async processing
1. Agent responds in thread

### Streaming Responses

Agents stream responses by progressively editing messages. Streaming is enabled only when the requesting user is online (checked via `should_use_streaming()`), saving API calls for offline users.

Tool call telemetry is emitted as plain inline markers and mirrored in `io.mindroom.tool_trace` metadata on the same message content.

Marker format:

```
Pending:   🔧 `tool_name` [N] ⏳
Completed: 🔧 `tool_name` [N]
```

Where `N` is 1-indexed per message and maps to `io.mindroom.tool_trace.events[N-1]`.
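Because markers are 1-indexed while the trace array is 0-indexed, looking up the event behind a marker involves an off-by-one shift. A trivial sketch; the function name is illustrative.

```python
def trace_event_for_marker(tool_trace: dict, n: int) -> dict:
    """Marker [N] is 1-indexed; io.mindroom.tool_trace.events is 0-indexed."""
    return tool_trace["events"][n - 1]
```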

## Presence

Agents set their Matrix presence with status messages containing model and role information (e.g., "🤖 Model: anthropic/claude-sonnet-4-5-latest | 💼 Code assistant | 🔧 5 tools available").

**Presence States:**

- **online** - Agent running and ready
- **unavailable** - Agent idle but connected (treated as online for streaming)
- **offline** - Agent stopped or disconnected

## Typing Indicators

Agents show typing indicators while processing via `typing_indicator()` context manager. The indicator auto-refreshes at `min(timeout/2, 15)` seconds to remain visible during long operations.
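The auto-refresh pattern can be sketched as an async context manager. This is not MindRoom's actual implementation: `send_typing` is a hypothetical coroutine standing in for the Matrix `m.typing` call, and only the `min(timeout/2, 15)` cadence comes from the text above.

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def typing_indicator(send_typing, timeout: float = 30.0):
    """Sketch: keep a typing indicator alive while the body runs."""
    interval = min(timeout / 2, 15)  # refresh cadence from the docs

    async def refresh():
        while True:
            await send_typing(True)          # (re)announce typing
            await asyncio.sleep(interval)    # before the indicator expires

    task = asyncio.create_task(refresh())
    try:
        yield
    finally:
        task.cancel()
        await send_typing(False)             # clear the indicator on exit
```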

## Mentions

Mentions are parsed via `format_message_with_mentions()` which handles multiple formats:

- `@calculator` - Short agent name
- `@mindroom_calculator` - Full username
- `@mindroom_calculator:localhost` - Full Matrix ID

Returns content with `m.mentions` and `formatted_body` containing clickable links.

## Large Messages

Messages exceeding the 64KB Matrix event limit are automatically handled by `prepare_large_message()`:

- Messages > 55,000 bytes and edits > 27,000 bytes use a fallback event
- Full original Matrix message content is uploaded as a JSON sidecar (`message-content.json`)
- Preview text included in message body (maximum that fits)
- Custom metadata (`io.mindroom.long_text.version = 2`) points to sidecar encoding (`matrix_event_content_json`)
- Preview event is compact (for example no inline `io.mindroom.tool_trace`), while the sidecar preserves full content fidelity
- Encrypted rooms: sidecar JSON is encrypted before upload (`message-content.json.enc`)
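The threshold logic reduces to one comparison. A sketch under assumptions: the function name is invented, and the lower edit threshold presumably reflects that edit events carry extra duplicated content, which is an inference rather than a documented fact.

```python
def needs_sidecar(content_bytes: int, is_edit: bool) -> bool:
    """Thresholds from the text: messages > 55,000 bytes, edits > 27,000."""
    limit = 27_000 if is_edit else 55_000  # both safely under Matrix's 64KB cap
    return content_bytes > limit
```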

## Identity Management

The `MatrixID` class handles Matrix user ID parsing and agent identification:

```
mid = MatrixID.parse("@mindroom_assistant:example.com")
mid.username  # "mindroom_assistant"
mid.domain    # "example.com"
mid.full_id   # "@mindroom_assistant:example.com"

# Create from agent name
mid = MatrixID.from_agent("assistant", "example.com")

# Extract agent name (returns "code" if configured, None otherwise)
agent_name = extract_agent_name("@mindroom_code:localhost", config)
```

## Configuration

Matrix settings are derived from `config.yaml`:

```
agents:
  assistant:
    rooms: [lobby, dev]  # Room aliases (auto-created if needed)

teams:
  research_team:
    rooms: [research]
```

Room aliases are resolved to room IDs automatically. Full room IDs (starting with `!`) are also supported.

When a room doesn't exist, it's created with an AI-generated topic, power users are invited, and avatars are set from `avatars/rooms/{room_key}.png` if available.

# Agent Orchestration

The `MultiAgentOrchestrator` (in `src/mindroom/bot.py`) manages the lifecycle of all agents, teams, and the router.

## Boot Sequence

```
main() entry
       │
       ▼
┌──────────────────┐
│ Sync Credentials │
│ (.env → vault)   │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│  Initialize()    │
│ ─────────────────│
│ 1. Parse config  │
│    (Pydantic)    │
│ 2. Load plugins  │
│ 3. Create "user" │
│    Matrix account│
│    (mindroom_user)│
│ 4. Create bots   │
│    for entities  │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│    Start()       │
│ ─────────────────│
│ 1. try_start()   │
│    each bot      │
│ 2. Setup rooms   │
│    & memberships │
│ 3. Create sync   │
│    tasks         │
└────────┬─────────┘
         │
         ▼
┌──────────────────────────────────────┐
│  Concurrent Tasks (asyncio.wait)     │
│ ─────────────────────────────────────│
│ • orchestrator_task (sync loops)     │
│ • watcher_task (config file polling) │
│ • skills_watcher_task (skill cache)  │
└──────────────────────────────────────┘
```

**Key details:**

- **Entity order**: Router first, then agents, then teams
- **Room setup** (`_setup_rooms_and_memberships`): Router creates rooms, invites agents/users, bots join
- **Sync loops**: Each bot runs `_sync_forever_with_restart()` with automatic retry
- **Internal user identity**: `mindroom_user.username` is bootstrap-only; only `display_name` should change later

## Hot Reload

Config changes are detected via polling (`watch_file()` checks `st_mtime` every second):

1. On change, `update_config()` is called
1. `_identify_entities_to_restart()` computes diff using `model_dump(exclude_none=True)`
1. Affected entities are stopped, recreated, and restarted
1. Removed entities run `cleanup()` (leave rooms, stop bot)
1. New/restarted bots go through room setup

Skills are watched separately via `_watch_skills_task()` with cache invalidation.
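The polling loop described above can be sketched in a few lines. A minimal illustration of the `st_mtime` technique, not MindRoom's actual `watch_file()`.

```python
import asyncio
import os


async def watch_file(path: str, on_change, interval: float = 1.0):
    """Poll a file's mtime and invoke on_change when it moves."""
    last = os.stat(path).st_mtime
    while True:
        await asyncio.sleep(interval)
        mtime = os.stat(path).st_mtime
        if mtime != last:
            last = mtime
            await on_change()
```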

## Message Handling

Event callbacks are wrapped in `_create_task_wrapper()` to run as background tasks, ensuring the sync loop is never blocked.

**`_on_message` flow:**

1. Skip own messages (except voice transcriptions from router)
1. Check sender authorization and handle edits
1. Check if already responded (`ResponseTracker`)
1. Router handles commands exclusively
1. Extract message context (mentions, thread history, non-agent mention detection)
1. Skip messages from other agents (unless mentioned)
1. Router performs AI routing when no agent mentioned and thread doesn't have multiple human participants
1. Check for team formation or individual response
1. Generate response and store memory

**`_on_image_message`**: Handles `RoomMessageImage` and `RoomEncryptedImage` events. Downloads and decrypts image data, then processes it through the agent. When no agent is mentioned, AI routing is used to select the appropriate agent, similar to text messages.

**`_on_reaction`**: Handles `ReactionEvent` for the interactive Q&A system (e.g., confirming or rejecting agent suggestions) and config confirmation workflows.

**Routing** (when no agent mentioned): Router uses `suggest_agent_for_message()` to pick the best agent based on room configuration and message content. Only routes when multiple agents are available. In threads where multiple non-agent users have posted, routing is skipped entirely — an explicit `@mention` is required. Non-MindRoom bots listed in `bot_accounts` are excluded from this detection.

## Concurrency

- Each bot runs its own sync loop via `_sync_forever_with_restart()`
- Sync loop failures trigger automatic restart with linear backoff (5s, 10s, 15s, ... up to 60s max)
- Event callbacks run as background tasks (never block the sync loop)
- `ResponseTracker` prevents duplicate replies
- `StopManager` handles cancellation of in-progress responses
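The linear backoff schedule above is just a capped multiple of the attempt count:

```python
def backoff_delay(attempt: int) -> int:
    """Linear backoff from the text: 5s, 10s, 15s, ... capped at 60s."""
    return min(5 * attempt, 60)
```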

### Graceful Shutdown

On `orchestrator.stop()`:

1. Shut down knowledge managers (`shutdown_knowledge_managers()`)
1. Cancel all sync tasks
1. Signal all bots to stop (`bot.running = False`)
1. Call `bot.stop()` for each bot (waits 5s for background tasks, closes Matrix client)

# Deployment

MindRoom can be deployed in various ways depending on your needs.

## Deployment Options

| Method                                                                                         | Best For                                                   |
| ---------------------------------------------------------------------------------------------- | ---------------------------------------------------------- |
| [Hosted Matrix + local MindRoom](https://docs.mindroom.chat/deployment/hosted-matrix/index.md) | Simplest setup: run only `uvx mindroom run` locally        |
| Full Stack (Docker Compose)                                                                    | All-in-one: bundled dashboard + Matrix (Synapse) + Element |
| [Docker (single container)](https://docs.mindroom.chat/deployment/docker/index.md)             | Single MindRoom runtime or when you already have Matrix    |
| [Kubernetes](https://docs.mindroom.chat/deployment/kubernetes/index.md)                        | Multi-tenant SaaS, production                              |
| Direct                                                                                         | Development, simple setups                                 |

## Bridges

Connect external messaging platforms to Matrix:

- [Bridges overview](https://docs.mindroom.chat/deployment/bridges/index.md) - available bridges and how they work
- [Telegram bridge](https://docs.mindroom.chat/deployment/bridges/telegram/index.md) - bridge Telegram chats via mautrix-telegram

## Google Services (Gmail/Calendar/Drive/Sheets)

Use these guides if you want users to connect Google accounts in the MindRoom frontend:

- [Google Services OAuth (Admin Setup)](https://docs.mindroom.chat/deployment/google-services-oauth/index.md) - one-time setup for shared/team deployments
- [Google Services OAuth (Individual Setup)](https://docs.mindroom.chat/deployment/google-services-user-oauth/index.md) - single-user bring-your-own OAuth app setup

## Quick Start

### Hosted Matrix + local MindRoom (simplest)

```
# Creates ~/.mindroom/config.yaml and ~/.mindroom/.env by default
uvx mindroom config init --profile public
$EDITOR ~/.mindroom/.env
uvx mindroom connect --pair-code ABCD-EFGH
uvx mindroom run
```

Generate the pair code at `https://chat.mindroom.chat` under `Settings -> Local MindRoom`.

See [Hosted Matrix deployment](https://docs.mindroom.chat/deployment/hosted-matrix/index.md) for the full walkthrough.

### Full Stack (recommended)

```
git clone https://github.com/mindroom-ai/mindroom-stack
cd mindroom-stack
cp .env.example .env
$EDITOR .env  # add at least one AI provider key

docker compose up -d
```

### Direct (Development)

```
mindroom run --storage-path ./mindroom_data
```

The config file path is set via `MINDROOM_CONFIG_PATH` and otherwise defaults to `./config.yaml`, then `~/.mindroom/config.yaml`.

If you want local Matrix + Cinny with a host-installed MindRoom runtime (Linux/macOS), use:

```
mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
mindroom run --storage-path ./mindroom_data
```

### Docker (single container)

```
docker run -d \
  --name mindroom \
  -p 8765:8765 \
  -v ./config.yaml:/app/config.yaml:ro \
  -v ./mindroom_data:/app/mindroom_data \
  --env-file .env \
  ghcr.io/mindroom-ai/mindroom:latest
```

See the [Docker deployment guide](https://docs.mindroom.chat/deployment/docker/index.md) for the full single-container setup.

### Kubernetes

See the [Kubernetes deployment guide](https://docs.mindroom.chat/deployment/kubernetes/index.md) for Helm chart configuration.

## Required Configuration

Full stack:

```
# .env in the full stack repo
OPENAI_API_KEY=sk-...
# Add other providers as needed
```

Direct and single-container deployments:

1. **Matrix homeserver** - Set `MATRIX_HOMESERVER` (must allow open registration for agent accounts)
1. **AI provider keys** - At least one of `OPENAI_API_KEY`, `OPENROUTER_API_KEY`, etc.
1. **Persistent storage** - Mount `mindroom_data/` to persist agent state (including `sessions/`, `learning/`, and memory data)

See the [Docker guide](https://docs.mindroom.chat/deployment/docker/#environment-variables) for the complete environment variable reference.

Hosted `mindroom.chat` deployments additionally use values from `mindroom connect` (`MINDROOM_LOCAL_CLIENT_ID`, `MINDROOM_LOCAL_CLIENT_SECRET`, and `MINDROOM_NAMESPACE`) to bootstrap agent registrations and avoid collisions on shared homeservers.

# Hosted Matrix + Local Backend

This guide covers the simplest production-like setup:

- Matrix homeserver is hosted at `https://mindroom.chat`
- Web chat runs at `https://chat.mindroom.chat`
- You run only `mindroom run` locally via `uvx`

## What Runs Where

| Component            | Runs on                          | Purpose                                 |
| -------------------- | -------------------------------- | --------------------------------------- |
| `chat.mindroom.chat` | Hosted web app                   | Login UI and pairing UI                 |
| `mindroom.chat`      | Hosted Matrix + provisioning API | Matrix transport + local onboarding API |
| `uvx mindroom run`   | Your machine/server              | Agent orchestration, tools, model calls |

## Prerequisites

- Python 3.12+
- `uv` installed
- A Matrix account that can sign in to `chat.mindroom.chat`
- At least one AI provider API key

## 1. Initialize Local Config

```
uvx mindroom config init --profile public
```

This creates `~/.mindroom/config.yaml` and `~/.mindroom/.env` with hosted defaults.

## 2. Add AI Provider Key

Edit `~/.mindroom/.env` and set at least one provider key:

```
OPENAI_API_KEY=...
# or OPENROUTER_API_KEY=...
```

## 3. Pair This Install

1. Open `https://chat.mindroom.chat`.
1. Go to `Settings -> Local MindRoom`.
1. Click `Generate Pair Code`.
1. Run locally:

```
uvx mindroom connect --pair-code ABCD-EFGH
```

Pair code behavior:

- Valid for 600 seconds (10 minutes).
- Only used to bootstrap local pairing.

After successful pairing, local provisioning credentials are written to `~/.mindroom/.env` unless you pass `--no-persist-env`.

## 4. Start MindRoom

```
uvx mindroom run
```

MindRoom then:

1. Connects to `MATRIX_HOMESERVER`
1. Creates/updates configured agent Matrix users
1. Joins/creates configured rooms
1. Starts processing messages

## Credential Model (Important)

`mindroom connect` returns local provisioning credentials:

- `MINDROOM_LOCAL_CLIENT_ID`
- `MINDROOM_LOCAL_CLIENT_SECRET`
- `MINDROOM_NAMESPACE`

`MINDROOM_LOCAL_CLIENT_ID` and `MINDROOM_LOCAL_CLIENT_SECRET` are **not Matrix user access tokens**. `MINDROOM_NAMESPACE` is appended to managed agent usernames and room aliases to avoid collisions on shared homeservers.

They can only call provisioning-service endpoints that accept local client credentials (for example agent registration flows). Revoke them from `Settings -> Local MindRoom` in the chat UI.

## Trust Model (Hosted Server vs Message Privacy)

For message *content*, this setup can be effectively zero-trust toward the homeserver operator when rooms are end-to-end encrypted.

- In E2EE rooms, the homeserver stores ciphertext and cannot read message bodies.
- The local `mindroom run` process holds your agent account keys and performs decryption locally.

Important limits:

- This does **not** hide metadata (room membership, timestamps, event IDs, sender IDs, traffic patterns).
- If a room is not encrypted, the homeserver can read plaintext.
- Any model/tool providers you send content to can still see the prompts/data you send to them.

So the precise claim is: encrypted Matrix message content is protected from the hosted homeserver, not that every part of the system is invisible to it.

## If You Self-Host Later

You can keep the same local flow and switch endpoints:

- `MATRIX_HOMESERVER=https://your-matrix.example.com`
- `MINDROOM_PROVISIONING_URL=https://your-matrix.example.com` (or your dedicated provisioning host)

Then run `mindroom connect` again with a fresh pair code from your own UI.

# Bridges

MindRoom uses [mautrix](https://docs.mau.fi/bridges/) bridges to connect external messaging platforms to Matrix. Bridges run as appservices alongside Synapse, creating ghost users for external contacts and relaying messages bidirectionally.

## Available Bridges

| Bridge                                                                      | Platform  | Mode                       | Status    |
| --------------------------------------------------------------------------- | --------- | -------------------------- | --------- |
| [Telegram](https://docs.mindroom.chat/deployment/bridges/telegram/index.md) | Telegram  | Puppet (login as yourself) | Available |
| Slack                                                                       | Slack     | -                          | Planned   |
| Email                                                                       | IMAP/SMTP | -                          | Planned   |

## How Bridges Work

Each bridge registers as a Matrix [Application Service](https://spec.matrix.org/latest/application-service-api/) with Synapse. The bridge:

1. Creates ghost users on Matrix for external contacts
1. Creates Matrix rooms for external chats
1. Relays messages between the external platform and Matrix in real time

In **puppet mode**, you log into your real account on the external platform. Your messages appear as coming from you on both sides, not from a bot.

## Adding a New Bridge

1. Create a config directory: `telegram-bridge/`, `slack-bridge/`, etc.
1. Add the bridge service to `compose.yaml`
1. Generate a registration file and mount it into Synapse
1. Add the registration path to `homeserver.yaml` under `app_service_config_files`
1. Restart Synapse and start the bridge
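
As a sketch of step 2, a bridge service entry in `compose.yaml` might look like this (the image name, paths, and service names are illustrative; check the mautrix docs for the real values for your bridge):

```
  slack-bridge:
    image: dock.mau.dev/mautrix/slack:latest
    restart: unless-stopped
    volumes:
      - ./slack-bridge:/data
    depends_on:
      synapse:
        condition: service_healthy
```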

# Telegram Bridge

Bridge Telegram and Matrix using [mautrix-telegram](https://docs.mau.fi/bridges/python/telegram/) in **puppet mode**. Each user logs in with their own Telegram account, so messages appear as the real user on both sides.

## What Can You Do With This?

The bridge enables two main use cases:

1. **Talk to MindRoom agents from Telegram** -- Link a Telegram group to a Matrix room (like Lobby) so you can chat with AI agents directly from the Telegram app, without opening Element.
1. **Access Telegram chats from Matrix** -- Your existing Telegram conversations appear as Matrix rooms in Element, so you can use one client for everything.

Most users want use case 1. See [Bridging Matrix rooms to Telegram](#step-2-bridge-matrix-rooms-to-telegram) after setup.

## Architecture

```
Telegram Cloud <--> mautrix-telegram <--> Synapse <--> Element
                    (bridge bot)         (homeserver)   (client)
```

- **mautrix-telegram** runs locally and connects outbound to Telegram's API -- your Matrix server does NOT need to be publicly accessible
- Each Matrix user can log into their own Telegram account (puppeting)
- Messages flow bidirectionally in real time

## Prerequisites

### 1. Telegram API Credentials

1. Go to [my.telegram.org](https://my.telegram.org) and log in
1. Click "API development tools"
1. Create an app (title: "MindRoom Bridge", short name: "mindroom")
1. Note the **api_id** (numeric) and **api_hash** (string)

### 2. Telegram Bot

1. Message [@BotFather](https://t.me/BotFather) on Telegram
1. Send `/newbot`, choose a name and username
1. Note the **bot token** (format: `123456789:ABCdefGHI...`)

## Setup

### 1. Add credentials to config

Edit `telegram-bridge/config.yaml` and replace the placeholders in the `telegram:` section:

```
telegram:
    api_id: 12345678          # Your numeric api_id
    api_hash: abcdef123456    # Your api_hash string
    bot_token: 123456:ABC...  # Your bot token from BotFather
```

Also update the same values in your `.env`:

```
TELEGRAM_API_ID=12345678
TELEGRAM_API_HASH=abcdef123456
TELEGRAM_BOT_TOKEN=123456:ABC...
```

### 2. Recreate Synapse and start the bridge

Synapse needs a new volume mount for the bridge registration file, so it must be **recreated** (not just restarted):

```
# Recreate Synapse to pick up the new volume mount and bridge registration
docker compose up -d synapse

# Wait for Synapse to become healthy
docker compose ps synapse

# Start the bridge
docker compose up -d telegram-bridge
```

> **Note:** `docker compose restart synapse` will NOT work here because the `registration.yaml` volume mount is new in `compose.yaml`. A restart reuses the existing container; `up -d` recreates it with the updated mounts.

### 3. Verify

```
# Check bridge logs
docker compose logs telegram-bridge --tail 20

# Look for "Startup actions complete"
```

## Usage

### Step 1: Log in to Telegram via the bridge

Before you can bridge anything, you must link your Telegram account:

1. Open Element at your Element URL
1. Start a DM with `@telegrambot:your.matrix.domain`
1. Send `login`
1. Enter your phone number in international format (e.g., `+1234567890`)
1. Enter the verification code sent to your Telegram app
1. Your existing Telegram chats will appear as Matrix rooms

### Step 2: Bridge Matrix rooms to Telegram

This is the primary use case -- talking to MindRoom agents from Telegram.

The bridge connects a **Telegram group** to a **Matrix room**. You need a Telegram group on the Telegram side because that's what you'll open in the Telegram app to send and receive messages.

**For each Matrix room you want to access from Telegram** (e.g., Lobby):

1. **Create a Telegram group** in the Telegram app (e.g., name it "MindRoom Lobby")
1. **Add your bridge bot** (e.g., `@your_bridge_bot`) to that Telegram group
1. **In Element**, go to the Matrix room you want to bridge (e.g., Lobby)
1. **Invite the bridge bot**: invite `@telegrambot:your.matrix.domain` to the room
1. **Link the rooms**: in the Matrix room, send `!tg bridge` -- the bot will list your Telegram groups and let you pick which one to link

Once linked:

- Messages you send in the **Telegram group** appear in the **Matrix room** -- MindRoom agents will see and respond to them
- Agent responses in the **Matrix room** appear in the **Telegram group**
- You can chat with MindRoom agents entirely from the Telegram app

Repeat for any other Matrix rooms you want accessible from Telegram.

> **Why can't I just invite the bot directly?** The bridge bot (`@telegrambot`) is Matrix-side infrastructure -- it manages the bridge but isn't a Telegram chat. To use Telegram as your client, there must be a Telegram group for the Telegram app to display. The bridge connects that group to the Matrix room bidirectionally.

### Accessing Telegram chats from Matrix

After logging in (step 1), your Telegram chats automatically appear as Matrix rooms in Element. This lets you use Element as a unified client for both Matrix and Telegram conversations.

- **Private chats**: Automatically bridged as Matrix DMs
- **Groups**: Automatically bridged if within `sync_create_limit` (default: 30)
- **Additional groups**: Use `search <query>` in the bridge bot DM to find and bridge more

### Bot Commands Reference

Send these to `@telegrambot:your.matrix.domain` in a DM, or in a bridged room:

| Command          | Description                                                     |
| ---------------- | --------------------------------------------------------------- |
| `login`          | Link your Telegram account                                      |
| `logout`         | Unlink your Telegram account                                    |
| `ping`           | Check bridge connection status                                  |
| `search <query>` | Search your Telegram chats                                      |
| `!tg bridge`     | Link current Matrix room to a Telegram group (send in the room) |
| `unbridge`       | Unlink current room from Telegram                               |
| `sync`           | Re-sync Telegram chat list                                      |
| `help`           | Show all commands                                               |

## Configuration Reference

Key settings in `telegram-bridge/config.yaml`:

| Setting                       | Default              | Description                                 |
| ----------------------------- | -------------------- | ------------------------------------------- |
| `bridge.username_template`    | `telegram_{userid}`  | Matrix username pattern for Telegram ghosts |
| `bridge.displayname_template` | `{displayname} (TG)` | Display name pattern for Telegram users     |
| `bridge.sync_create_limit`    | `30`                 | Max chats to auto-create on first sync      |
| `bridge.sync_direct_chats`    | `true`               | Auto-bridge private chats                   |
| `bridge.encryption.allow`     | `true`               | Allow E2EE in bridged rooms                 |
| `bridge.permissions`          | See config           | Who can use the bridge and at what level    |

### Permission Levels

Set in `bridge.permissions`:

- `relaybot` - Messages relayed through the bot (not puppeted)
- `user` - Can use the bridge but not log in
- `puppeting` - Can log in with their Telegram account
- `full` - Full access including creating portals
- `admin` - Bridge administration

Default config gives `full` to all users on your homeserver domain.
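
For example, mautrix-telegram's permissions map associates a homeserver domain or Matrix user ID with a level. This sketch (domain and user ID are placeholders) grants `full` to everyone on your homeserver and `admin` to one user:

```
bridge:
    permissions:
        "your.matrix.domain": full
        "@alice:your.matrix.domain": admin
```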

## Troubleshooting

### Bridge won't start

- Check credentials: `api_id` must be numeric, `api_hash` must be a hex string, `bot_token` must be a valid BotFather token
- Check logs: `docker compose logs telegram-bridge --tail 50`
- Verify Synapse is healthy: `docker compose ps`

### Login fails

- Ensure `api_id` and `api_hash` are from the same Telegram app
- The bot token must be from a bot you own (not revoked)
- If you get "FLOOD_WAIT", wait the indicated time before retrying

### Messages not bridging

- Check the bridge is connected: DM the bot and send `ping`
- Verify Synapse has the registration: check `app_service_config_files` in `homeserver.yaml`
- Check bridge permissions in `config.yaml` - your user domain must have `full` or `puppeting`

### Double puppeting

To make your messages from Matrix appear as your real Telegram account (not the bridge bot):

1. This is automatic when you log in via `login` - puppet mode is the default
1. If messages still show as the bot, check `bridge.sync_with_custom_puppets` in config

### Database issues

The bridge uses SQLite stored in the `telegram-bridge` data volume. To reset:

```
docker compose stop telegram-bridge
rm <data-dir>/telegram-bridge/mautrix-telegram.db
docker compose up -d telegram-bridge
```

Note: This will require re-logging into Telegram.

### Registration out of sync

If Synapse reports appservice errors, regenerate the registration:

```
docker compose stop telegram-bridge
rm telegram-bridge/registration.yaml
# Temporarily set valid api_id in config.yaml, then:
docker compose run --rm --no-deps --entrypoint \
  "python -m mautrix_telegram -g -c /data/config.yaml -r /data/registration.yaml" \
  telegram-bridge
docker compose restart synapse
docker compose up -d telegram-bridge
```

## Maintenance

### Updating

```
docker compose pull telegram-bridge
docker compose up -d telegram-bridge
```

### Backup

Important data locations:

- `telegram-bridge/config.yaml` - Bridge configuration
- `telegram-bridge/registration.yaml` - Appservice registration
- Telegram bridge data volume - SQLite database with session data

# Google Services OAuth (Admin Setup)

This is the one-time setup for a shared Google OAuth app in MindRoom. After you finish these steps, users only click **Login with Google** in the frontend.

## Who This Is For

Use this guide if you are running MindRoom for a team, organization, or hosted deployment.

If you are a single local user and want to bring your own Google OAuth app, see [Google Services OAuth (Individual Setup)](https://docs.mindroom.chat/deployment/google-services-user-oauth/index.md).

## What You Need Before Starting

- Your MindRoom URL (local example: `http://localhost:8765`, production example: `https://mindroom.example.com`)
- Access to [Google Cloud Console](https://console.cloud.google.com/)
- Access to set MindRoom environment variables

The MindRoom callback path is always:

```
/api/google/callback
```

Your full callback URL is:

```
<your-mindroom-origin>/api/google/callback
```

## Step 1: Create a Google Cloud Project

1. Open [Google Cloud Console](https://console.cloud.google.com/).
1. Create a new project (or select an existing one).
1. Save the project ID. You will use it as `GOOGLE_PROJECT_ID`.

## Step 2: Enable APIs

1. In Google Cloud Console, go to **APIs & Services → Library**.
1. Enable:
   - Gmail API
   - Google Calendar API
   - Google Drive API
   - Google Sheets API

## Step 3: Configure OAuth Consent Screen

1. Go to **APIs & Services → OAuth consent screen**.
1. User type:
   - `External` for public or mixed users
   - `Internal` for Google Workspace-only
1. Fill required app info and save.
1. Add test users if the app is still in testing mode.
1. Add scopes:
   - `https://www.googleapis.com/auth/gmail.readonly`
   - `https://www.googleapis.com/auth/gmail.modify`
   - `https://www.googleapis.com/auth/gmail.compose`
   - `https://www.googleapis.com/auth/calendar`
   - `https://www.googleapis.com/auth/spreadsheets`
   - `https://www.googleapis.com/auth/drive.file`
   - `openid`
   - `https://www.googleapis.com/auth/userinfo.email`
   - `https://www.googleapis.com/auth/userinfo.profile`

## Step 4: Create OAuth Client ID

1. Go to **APIs & Services → Credentials**.
1. Click **Create Credentials → OAuth client ID**.
1. Choose **Web application**.
1. Under **Authorized redirect URIs**, add:
   - Local: `http://localhost:8765/api/google/callback`
   - Production: `https://<your-domain>/api/google/callback`
1. Copy the generated client ID and client secret.

## Step 5: Configure MindRoom Environment

Set these env vars in your MindRoom deployment (`.env`, Kubernetes secret, or hosting config):

```
GOOGLE_CLIENT_ID=your-app-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-app-client-secret
GOOGLE_PROJECT_ID=your-project-id
GOOGLE_REDIRECT_URI=http://localhost:8765/api/google/callback
```

Notes:

- `GOOGLE_REDIRECT_URI` must match one of your Google Console redirect URIs exactly.
- If omitted, MindRoom defaults to `http://localhost:8765/api/google/callback`.

Restart MindRoom after setting env vars.

## Step 6: Verify MindRoom Is Configured

Run:

```
curl -s http://localhost:8765/api/google/status
```

Expected result includes:

- `"has_credentials": true`

If `connected` is false at this point, that is normal until a user authorizes.
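
If you script this check, the two documented fields are enough. A minimal sketch (the payload below is a hand-written example of the documented shape, not a live response):

```python
import json

# Hand-written example payload with the documented fields
payload = '{"has_credentials": true, "connected": false}'
status = json.loads(payload)

# Admin setup done, no user authorized yet: exactly the expected state here
assert status["has_credentials"] is True
print("ready for users to connect:", status["has_credentials"] and not status["connected"])
```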

## Step 7: Verify Frontend User Flow

1. Open **Integrations → Google Services**.
1. If setup is correct, the card shows **Ready to Connect**.
1. Users can now click **Login with Google** and authorize access.

## End User Steps (After Admin Setup)

Each user does only this:

1. Open **Integrations → Google Services**.
1. Click **Login with Google**.
1. Approve scopes.
1. Confirm status shows **Connected**.

## Production Notes

- Apps in testing mode are limited to test users.
- For broad public usage, complete Google OAuth verification (consent screen, policies, branding, etc.).
- Never commit `GOOGLE_CLIENT_SECRET` to git.

## Security Notes

- OAuth access/refresh tokens are stored in MindRoom credentials storage, typically:
  - `mindroom_data/credentials/google_credentials.json`
- Restrict filesystem access to your MindRoom data directory.
- Revoke app access from Google account settings if needed.

## Troubleshooting

### "Google OAuth is not configured"

`GOOGLE_CLIENT_ID` or `GOOGLE_CLIENT_SECRET` is missing in the MindRoom environment.

### "Redirect URI mismatch"

Ensure all three are identical:

- `GOOGLE_REDIRECT_URI` in the MindRoom environment
- Redirect URI in Google Console
- Actual MindRoom callback URL

### Users cannot authorize while app is in testing mode

Add those users to OAuth consent screen test users.

# Google Services OAuth (Individual Setup)

This guide is for one person running MindRoom and creating their own Google OAuth app.

For team/shared deployments, use [Google Services OAuth (Admin Setup)](https://docs.mindroom.chat/deployment/google-services-oauth/index.md).

## What You Need Before Starting

- A Google account
- Access to Google Cloud Console
- A running MindRoom instance with the bundled dashboard (default URL: `http://localhost:8765`)

The callback path is always:

```
/api/google/callback
```

So the default full callback URL is:

```
http://localhost:8765/api/google/callback
```

## Step 1: Create Google Cloud Project

1. Open [Google Cloud Console](https://console.cloud.google.com/).
1. Create or select a project.
1. Save the project ID for `GOOGLE_PROJECT_ID`.

## Step 2: Enable APIs

1. Go to **APIs & Services → Library**.
1. Enable:
   - Gmail API
   - Google Calendar API
   - Google Drive API
   - Google Sheets API

## Step 3: Configure OAuth Consent Screen

1. Go to **APIs & Services → OAuth consent screen**.
1. Choose `External`.
1. Fill required fields and save.
1. Add your own email as a test user.
1. Add scopes:
   - `https://www.googleapis.com/auth/gmail.readonly`
   - `https://www.googleapis.com/auth/gmail.modify`
   - `https://www.googleapis.com/auth/gmail.compose`
   - `https://www.googleapis.com/auth/calendar`
   - `https://www.googleapis.com/auth/spreadsheets`
   - `https://www.googleapis.com/auth/drive.file`
   - `openid`
   - `https://www.googleapis.com/auth/userinfo.email`
   - `https://www.googleapis.com/auth/userinfo.profile`

## Step 4: Create OAuth Client ID

1. Go to **APIs & Services → Credentials**.
1. Click **Create Credentials → OAuth client ID**.
1. Choose **Web application**.
1. Add redirect URI:
   - `http://localhost:8765/api/google/callback`
1. Copy client ID and client secret.

## Step 5: Configure MindRoom

Add this to `.env` (or export in your shell):

```
MINDROOM_PORT=8765
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-client-secret
GOOGLE_PROJECT_ID=your-project-id
GOOGLE_REDIRECT_URI=http://localhost:8765/api/google/callback
```

Restart MindRoom.

## Step 6: Verify MindRoom Reads Credentials

Run:

```
curl -s http://localhost:8765/api/google/status
```

Expected:

- `"has_credentials": true`

## Step 7: Connect in Frontend

1. Open **Integrations → Google Services**.
1. Click **Login with Google**.
1. Sign in and approve requested scopes.
1. You should see **Connected** and your available services.

## Step 8: Enable Google Tools in `config.yaml`

After OAuth is connected, add Google tools to your agent config:

```
agents:
  email_assistant:
    display_name: Email Assistant
    role: Help manage and respond to emails
    tools:
      - gmail
      - google_calendar
      - google_sheets
    instructions:
      - Search important unread emails first.
      - Draft replies and ask for confirmation before sending.
```

Gmail tool capabilities include:

- `gmail_search`: Search emails with Gmail query syntax (for example `is:unread` or `from:boss@company.com`)
- `gmail_latest`: Read latest inbox emails
- `gmail_unread`: Read unread emails only

After editing `config.yaml`, restart MindRoom to reload configuration.

## Disconnect Later (Optional)

1. In MindRoom frontend, click **Disconnect Google Account**.
1. Optional: also revoke app access in [Google Account Permissions](https://myaccount.google.com/permissions).

## Troubleshooting

### "Admin Setup Required" shown in frontend

MindRoom does not have valid Google OAuth env vars yet.

### "Failed to complete OAuth flow"

Check redirect URI exact match between Google Cloud Console and MindRoom.

### Access blocked by Google

If your app is in testing mode, ensure your account is listed as a test user.

# Docker Deployment

Deploy MindRoom using Docker for simple, containerized deployments.

## Quick Start

MindRoom ships as a single runtime container that serves:

- the bot orchestrator
- the dashboard UI at `http://localhost:8765`
- the dashboard API at `http://localhost:8765/api`
- the OpenAI-compatible API at `http://localhost:8765/v1`

Run it with:

```
docker run -d \
  --name mindroom \
  -p 8765:8765 \
  -v ./config.yaml:/app/config.yaml:ro \
  -v ./mindroom_data:/app/mindroom_data \
  --env-file .env \
  ghcr.io/mindroom-ai/mindroom:latest
```

## Docker Compose

Create a `docker-compose.yml`:

```
services:
  mindroom:
    image: ghcr.io/mindroom-ai/mindroom:latest
    container_name: mindroom
    restart: unless-stopped
    ports:
      - "8765:8765"
    volumes:
      - ./config.yaml:/app/config.yaml:ro
      - ./mindroom_data:/app/mindroom_data
    env_file:
      - .env
    environment:
      - MINDROOM_STORAGE_PATH=/app/mindroom_data
      - LOG_LEVEL=${LOG_LEVEL:-INFO}
      - MATRIX_HOMESERVER=${MATRIX_HOMESERVER}
      # Optional: for self-signed certificates
      # - MATRIX_SSL_VERIFY=false
      # Optional: override server name for federation
      # - MATRIX_SERVER_NAME=example.com
```

Run with:

```
docker compose up -d
```

## Environment Variables

Key environment variables (set in `.env` or pass directly):

| Variable                | Description                                | Default                                         |
| ----------------------- | ------------------------------------------ | ----------------------------------------------- |
| `MATRIX_HOMESERVER`     | Matrix server URL                          | `http://localhost:8008`                         |
| `MATRIX_SSL_VERIFY`     | Verify SSL certificates                    | `true`                                          |
| `MATRIX_SERVER_NAME`    | Server name for federation (optional)      | -                                               |
| `MINDROOM_STORAGE_PATH` | Data storage directory                     | Relative to config file                         |
| `LOG_LEVEL`             | Logging level                              | `INFO`                                          |
| `MINDROOM_CONFIG_PATH`  | Path to config.yaml                        | `./config.yaml`, then `~/.mindroom/config.yaml` |
| `ANTHROPIC_API_KEY`     | Anthropic API key (if using Claude models) | -                                               |
| `OPENAI_API_KEY`        | OpenAI API key (if using OpenAI models)    | -                                               |
| `MINDROOM_API_KEY`      | API key for dashboard auth (standalone)    | - (open access)                                 |

Streaming responses are configured in `config.yaml` via `defaults.enable_streaming` (default: `true`).
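
For example, to turn streaming off, set the key under `defaults` in `config.yaml` (a sketch, assuming only the `defaults.enable_streaming` key described above):

```
defaults:
  enable_streaming: false
```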

If `MINDROOM_API_KEY` is set, the browser dashboard will prompt for the key via a same-origin login page before loading the UI.

## Building from Source

Build from the repository root:

```
docker build -t mindroom:dev -f local/instances/deploy/Dockerfile.mindroom .
```

The Dockerfile uses a multi-stage build with `uv` for dependency management and runs as a non-root user (UID 1000).

A `Dockerfile.mindroom-minimal` variant is also available, which builds a smaller image without pre-installed tool extras -- useful for sandbox runners.

## With Local Matrix

For development, run MindRoom alongside a local Matrix server:

```
# Start Matrix (Synapse + Postgres + Redis)
cd local/matrix && docker compose up -d

# Verify Matrix is running
curl -s http://localhost:8008/_matrix/client/versions

# Start MindRoom using the docker-compose.yml you created above
docker compose up -d
```

The local Matrix stack includes:

- **Synapse**: Matrix homeserver on port 8008
- **PostgreSQL**: Database backend
- **Redis**: Caching layer

If you're running the backend on the host (not in Docker), you can use `mindroom local-stack-setup` to start Synapse + MindRoom Cinny and persist local Matrix env vars automatically:

```
mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
mindroom run
```

## Health Checks

The container exposes a health endpoint on port 8765:

```
curl http://localhost:8765/api/health
```

## Data Persistence

MindRoom stores data in the `mindroom_data` directory:

- `sessions/` - Per-agent conversation history (SQLite)
- `learning/` - Per-agent Agno Learning state (SQLite, persistent across restarts)
- `chroma/` - ChromaDB vector store for agent/room memories
- `knowledge_db/` - Knowledge base vector stores
- `culture/` - Shared culture state
- `tracking/` - Response tracking to avoid duplicates
- `credentials/` - Synchronized secrets from `.env`
- `logs/` - Application logs
- `matrix_state.yaml` - Matrix connection state
- `encryption_keys/` - Matrix E2EE keys (if enabled)

## Sandbox Proxy Isolation

When configured, `shell`, `file`, and `python` tool calls can be proxied to a separate **sandbox-runner** sidecar container. The sidecar runs the same image but without access to secrets, credentials, or the primary data volume. This provides real process-level isolation for code-execution tools. Without proxy configuration, all tools execute locally in the MindRoom process.

See [Sandbox Proxy Isolation](https://docs.mindroom.chat/deployment/sandbox-proxy/index.md) for full documentation including Docker Compose examples, Kubernetes sidecar setup, host-machine-with-container mode, credential leases, and environment variable reference.

> [!TIP] For production, use a reverse proxy (Traefik, Nginx) in front of the MindRoom container when you want TLS, host routing, or additional auth layers. See `local/instances/deploy/docker-compose.yml` for an example with Traefik labels.

# Sandbox Proxy Isolation

When agents have code-execution tools (`shell`, `file`, `python`), they can read and modify anything on the filesystem — config files, credentials, application code. The **sandbox proxy** isolates these tools by forwarding their calls to a separate process (the **sandbox runner**) that has no access to secrets or sensitive data.

## How it works

```
┌──────────────────────────┐         HTTP          ┌──────────────────────────┐
│ Primary MindRoom runtime │  ── tool call ──▶     │ Sandbox runner           │
│ has secrets              │  ◀── result ───       │ no secrets               │
│ has credentials          │                       │ no credentials           │
│ has persistent data      │                       │ writable scratch only    │
└──────────────────────────┘                       └──────────────────────────┘
```

1. Agent invokes `shell.run_shell_command(...)` (or file/python tool)
1. Primary MindRoom runtime detects the tool is in the proxy list
1. Call is forwarded over HTTP to the sandbox runner
1. Runner executes the tool locally and returns the result
1. All other tools (API tools, search, etc.) execute in the primary MindRoom runtime as usual

The runner authenticates requests with a shared token (`MINDROOM_SANDBOX_PROXY_TOKEN`). For tools that need credentials (e.g., a shell tool that calls an authenticated API), the primary MindRoom runtime can create a short-lived **credential lease** that the runner consumes once — credentials never persist in the runner's memory.
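
The routing decision in step 2 can be sketched as follows. The function name here is illustrative, not MindRoom's internal API; the only assumption is the comma-separated `MINDROOM_SANDBOX_PROXY_TOOLS` variable described in this page:

```python
import os

# Hypothetical sketch: a tool is proxied to the sandbox runner when it
# appears in the comma-separated MINDROOM_SANDBOX_PROXY_TOOLS list.
def should_proxy(tool_name: str) -> bool:
    raw = os.environ.get("MINDROOM_SANDBOX_PROXY_TOOLS", "")
    proxied = {t.strip() for t in raw.split(",") if t.strip()}
    return tool_name in proxied

os.environ["MINDROOM_SANDBOX_PROXY_TOOLS"] = "shell,file,python"
print(should_proxy("shell"))       # True: forwarded to the sandbox runner
print(should_proxy("web_search"))  # False: executes in the primary runtime
```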

## Deployment modes

### Docker Compose (sidecar container)

Add a `sandbox-runner` service alongside MindRoom. Both use the same image; the runner just has a different entrypoint and no access to `.env` or the data volume.

```
services:
  mindroom:
    image: ghcr.io/mindroom-ai/mindroom:latest
    env_file: .env
    volumes:
      - ./config.yaml:/app/config.yaml:ro
      - ./mindroom_data:/app/mindroom_data
    environment:
      - MINDROOM_SANDBOX_PROXY_URL=http://sandbox-runner:8766
      - MINDROOM_SANDBOX_PROXY_TOKEN=${MINDROOM_SANDBOX_PROXY_TOKEN}
      - MINDROOM_SANDBOX_EXECUTION_MODE=selective
      - MINDROOM_SANDBOX_PROXY_TOOLS=shell,file,python

  sandbox-runner:
    image: ghcr.io/mindroom-ai/mindroom:latest
    command: ["/app/run-sandbox-runner.sh"]
    user: "1000:1000"
    volumes:
      - sandbox-workspace:/app/workspace
    environment:
      - MINDROOM_SANDBOX_RUNNER_MODE=true
      - MINDROOM_SANDBOX_PROXY_TOKEN=${MINDROOM_SANDBOX_PROXY_TOKEN}
      - MINDROOM_CONFIG_PATH=/app/config.yaml
      - MINDROOM_STORAGE_PATH=/app/workspace/.mindroom

volumes:
  sandbox-workspace:
```

> [!IMPORTANT] The `sandbox-workspace` Docker volume is created as root by default. The runner runs as UID 1000, so you must fix ownership after first creating the volume:
>
> ```
> docker run --rm -v sandbox-workspace:/workspace busybox chown -R 1000:1000 /workspace
> ```
>
> Alternatively, omit the `user:` directive to run as root (less secure).

Key differences from the primary MindRoom runtime:

- **No `env_file`** — runner has no API keys, no Matrix credentials
- **No data volume** — runner cannot access `mindroom_data/`
- **Scratch workspace** — a dedicated volume for file operations
- **`MINDROOM_STORAGE_PATH`** — pointed at a writable location inside the workspace so the tool registry can initialize without access to the primary data volume

### Kubernetes (pod sidecar)

In Kubernetes the runner runs as a second container in the same pod, sharing `localhost` networking. See `cluster/k8s/instance/templates/deployment-mindroom.yaml` for the full manifest. The runner gets:

- An `emptyDir` volume for scratch workspace
- Read-only access to config (for plugin tool registration)
- No access to the secrets volume

### Host machine + Docker sandbox container

Run MindRoom directly on the host while isolating code-execution tools in a Docker container:

```
# 1. Start the sandbox runner container
docker run -d \
  --name mindroom-sandbox-runner \
  -p 8766:8766 \
  -e MINDROOM_SANDBOX_RUNNER_MODE=true \
  -e MINDROOM_SANDBOX_PROXY_TOKEN=your-secret-token \
  -e MINDROOM_STORAGE_PATH=/app/workspace/.mindroom \
  ghcr.io/mindroom-ai/mindroom:latest \
  /app/run-sandbox-runner.sh

# 2. Start MindRoom on the host with proxy config
export MINDROOM_SANDBOX_PROXY_URL=http://localhost:8766
export MINDROOM_SANDBOX_PROXY_TOKEN=your-secret-token
export MINDROOM_SANDBOX_EXECUTION_MODE=selective
export MINDROOM_SANDBOX_PROXY_TOOLS=shell,file,python
mindroom run
```

Or add the proxy variables to your `.env` file:

```
MINDROOM_SANDBOX_PROXY_URL=http://localhost:8766
MINDROOM_SANDBOX_PROXY_TOKEN=your-secret-token
MINDROOM_SANDBOX_EXECUTION_MODE=selective
MINDROOM_SANDBOX_PROXY_TOOLS=shell,file,python
```

This gives you the convenience of running MindRoom natively while keeping code-execution tools inside a container boundary.

> [!TIP] If you use plugin tools that also need proxying, mount your `config.yaml` into the runner container so it can register them:
>
> ```
> docker run -d \
>   --name mindroom-sandbox-runner \
>   -p 8766:8766 \
>   -v ./config.yaml:/app/config.yaml:ro \
>   -e MINDROOM_CONFIG_PATH=/app/config.yaml \
>   -e MINDROOM_SANDBOX_RUNNER_MODE=true \
>   -e MINDROOM_SANDBOX_PROXY_TOKEN=your-secret-token \
>   -e MINDROOM_STORAGE_PATH=/app/workspace/.mindroom \
>   ghcr.io/mindroom-ai/mindroom:latest \
>   /app/run-sandbox-runner.sh
> ```

## Environment variable reference

### Primary MindRoom runtime (proxy client)

| Variable                                        | Description                                        | Default                               |
| ----------------------------------------------- | -------------------------------------------------- | ------------------------------------- |
| `MINDROOM_SANDBOX_PROXY_URL`                    | URL of the sandbox runner                          | *(none — proxy disabled)*             |
| `MINDROOM_SANDBOX_PROXY_TOKEN`                  | Shared auth token                                  | *(required when proxy URL is set)*    |
| `MINDROOM_SANDBOX_EXECUTION_MODE`               | `selective`, `all`, `off`                          | *(unset — uses proxy tools list)*     |
| `MINDROOM_SANDBOX_PROXY_TOOLS`                  | Comma-separated tool names to proxy                | `*` (all, unless mode is `selective`) |
| `MINDROOM_SANDBOX_PROXY_TIMEOUT_SECONDS`        | HTTP timeout for proxy calls                       | `120`                                 |
| `MINDROOM_SANDBOX_CREDENTIAL_LEASE_TTL_SECONDS` | Credential lease lifetime                          | `60`                                  |
| `MINDROOM_SANDBOX_CREDENTIAL_POLICY_JSON`       | JSON mapping tool selectors to credential services | `{}`                                  |

### Sandbox runner

| Variable                                             | Description                                                                  | Default                                                      |
| ---------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------ |
| `MINDROOM_SANDBOX_RUNNER_MODE`                       | Set to `true` to indicate runner mode                                        | `false`                                                      |
| `MINDROOM_SANDBOX_PROXY_TOKEN`                       | Shared auth token (must match primary)                                       | *(required)*                                                 |
| `MINDROOM_SANDBOX_RUNNER_EXECUTION_MODE`             | `inprocess` or `subprocess`                                                  | `inprocess`                                                  |
| `MINDROOM_SANDBOX_RUNNER_SUBPROCESS_TIMEOUT_SECONDS` | Subprocess timeout                                                           | `120`                                                        |
| `MINDROOM_STORAGE_PATH`                              | Writable directory for tool registry init (e.g., `/app/workspace/.mindroom`) | `mindroom_data` next to config *(will fail if not writable)* |
| `MINDROOM_CONFIG_PATH`                               | Path to config.yaml (for plugin tool registration)                           | *(optional)*                                                 |

## Execution modes

| Mode                         | Behavior                                                                                                   |
| ---------------------------- | ---------------------------------------------------------------------------------------------------------- |
| `selective`                  | Only tools listed in `MINDROOM_SANDBOX_PROXY_TOOLS` are proxied. Recommended.                              |
| `all` / `sandbox_all`        | Every tool call goes through the proxy                                                                     |
| `off` / `local` / `disabled` | Proxy disabled even if URL is set                                                                          |
| *(unset)*                    | If `MINDROOM_SANDBOX_PROXY_TOOLS` is `*` or unset, proxies all tools; if set to a list, proxies only those |

## Credential leases

Some proxied tools need credentials (e.g., a `shell` tool that runs `git push` and needs an SSH key). Rather than giving the runner permanent access to secrets, the primary MindRoom runtime creates a **credential lease** — a short-lived, single-use token that the runner exchanges for credentials during execution.

Configure which credentials are shared via `MINDROOM_SANDBOX_CREDENTIAL_POLICY_JSON`:

```
export MINDROOM_SANDBOX_CREDENTIAL_POLICY_JSON='{"shell": ["github"], "python": ["openai"]}'
```

This shares the `github` credential service with `shell` tool calls and `openai` with `python` tool calls. Credentials are never stored in the runner — each lease is consumed on use and expires after the configured TTL.

## Security considerations

- The sandbox runner **never has** API keys, Matrix credentials, or access to `mindroom_data/`
- The shared token authenticates all proxy traffic — use a strong random value
- Credential leases are single-use by default and expire after 60 seconds
- The runner's `securityContext` drops all capabilities and disables privilege escalation
- In Kubernetes, the runner uses `emptyDir` for scratch space — no persistent state
- The primary MindRoom runtime **does not** mount the sandbox runner router — the `/api/sandbox-runner/` endpoints exist only in the runner process

## Per-agent configuration

MindRoom ships a built-in policy that decides which tools run locally and which run in the sandbox worker. You can override which tools are routed through the sandbox proxy per agent (or set a default for all agents) in `config.yaml`:

```
defaults:
  worker_tools: [shell, file]        # route shell+file through the sandbox proxy for all agents by default

agents:
  code:
    tools: [file, shell, calculator]
    # inherits worker_tools from defaults → shell and file proxied

  research:
    tools: [web_search, calculator]
    worker_tools: []                 # explicitly no proxying

  untrusted:
    tools: [shell, file, python]
    worker_tools: [shell, file, python]   # proxy everything
```

The `worker_tools` field has three states:

| Value               | Behavior                                                                                                                                                  |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `null` (omitted)    | Use MindRoom's built-in default routing policy. Today that defaults to `coding`, `file`, `python`, and `shell` when those tools are enabled for the agent |
| `[]` (empty list)   | Explicitly disable sandbox proxying for this agent                                                                                                        |
| `["shell", "file"]` | Proxy exactly these tools for this agent                                                                                                                  |

Agent-level `worker_tools` overrides `defaults.worker_tools`. A sandbox proxy URL (`MINDROOM_SANDBOX_PROXY_URL`) must still be configured for any proxying to take effect.
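The three-state precedence can be sketched as follows. This is an illustrative model of the rules above; the built-in default set is taken from the table (`coding`, `file`, `python`, `shell`):

```python
BUILTIN_DEFAULT = {"coding", "file", "python", "shell"}


def effective_worker_tools(agent_worker_tools, default_worker_tools, agent_tools):
    """Resolve which of an agent's enabled tools are proxied.

    Sketch only: models the null / empty-list / explicit-list states
    and the agent-over-defaults precedence described above.
    """
    if agent_worker_tools is not None:       # agent-level override wins
        selected = set(agent_worker_tools)   # [] explicitly disables proxying
    elif default_worker_tools is not None:
        selected = set(default_worker_tools)
    else:
        selected = BUILTIN_DEFAULT           # built-in default routing policy
    # only tools actually enabled for the agent can be proxied
    return sorted(selected & set(agent_tools))
```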

## Worker Scope

`worker_tools` chooses which tools execute through the sandbox proxy. `worker_scope` chooses which proxied calls share the same worker-owned storage root. Some credential-backed custom tools stay local even if they are listed in `worker_tools`. Currently that local-only set is `gmail`, `google_calendar`, `google_sheets`, and `homeassistant`.

You can set `worker_scope` per agent or in `defaults`:

```
defaults:
  worker_tools: [shell, file]
  worker_scope: user_agent

agents:
  code:
    tools: [shell, file]
    # inherits worker_scope=user_agent

  reviewer:
    tools: [shell, file]
    worker_scope: shared

  bridge_helper:
    tools: [shell]
    worker_scope: room_thread
```

The supported values are:

| Value         | Behavior                                                       |
| ------------- | -------------------------------------------------------------- |
| `shared`      | One shared worker state per agent                              |
| `user`        | One worker state per requester                                 |
| `user_agent`  | One worker state per requester and agent                       |
| `room_thread` | One worker state per thread, or per room when no thread exists |

If `worker_scope` is unset, proxied tools still use the sandbox runner, but the request stays unscoped and no worker-specific storage root is selected. `worker_scope` also affects dashboard credential support and OpenAI-compatible agent eligibility. The dashboard credential UI only supports unscoped agents and agents with `worker_scope=shared`. Agents using `user`, `user_agent`, or `room_thread` must treat credentials as runtime-owned worker state.
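The scope-to-storage mapping can be sketched as a key-derivation function. The key format here is purely illustrative; only the grouping semantics follow the table above:

```python
def worker_storage_key(scope, agent, user=None, room=None, thread=None):
    """Derive a worker storage-root key for a proxied call.

    Illustrative only: the actual key format MindRoom uses is not
    documented here, but the grouping matches the scope table.
    """
    if scope == "shared":
        return f"agent:{agent}"                  # one state per agent
    if scope == "user":
        return f"user:{user}"                    # one state per requester
    if scope == "user_agent":
        return f"user:{user}/agent:{agent}"      # per requester and agent
    if scope == "room_thread":
        # per thread, falling back to the room when no thread exists
        return f"thread:{thread}" if thread else f"room:{room}"
    return None                                  # unset scope: request stays unscoped
```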

## Without sandbox proxy

When no `MINDROOM_SANDBOX_PROXY_URL` is set, all tools execute directly in the primary MindRoom runtime process. This is fine for development but not recommended for production deployments where agents run untrusted code.

# Kubernetes Deployment

Deploy MindRoom on Kubernetes for production multi-tenant deployments.

## Architecture

MindRoom uses two Helm charts:

- **Instance Chart** (`cluster/k8s/instance/`) - Individual MindRoom runtime with bundled dashboard/API plus Matrix/Synapse
- **Platform Chart** (`cluster/k8s/platform/`) - SaaS control plane (API, frontend, provisioner)

## Prerequisites

- Kubernetes cluster (tested with k3s via kube-hetzner)
- kubectl and helm installed
- NGINX Ingress Controller
- cert-manager (for TLS certificates)

## Instance Deployment

### Via Provisioner API (Recommended)

```
export KUBECONFIG=./cluster/terraform/terraform-k8s/mindroom-k8s_kubeconfig.yaml

# Provision, check status, view logs
./cluster/scripts/mindroom-cli.sh provision 1
./cluster/scripts/mindroom-cli.sh status
./cluster/scripts/mindroom-cli.sh logs 1
```

### Direct Helm Installation

For debugging only:

```
helm upgrade --install instance-1 ./cluster/k8s/instance \
  --namespace mindroom-instances \
  --create-namespace \
  --set customer=1 \
  --set accountId="your-account-uuid" \
  --set baseDomain=mindroom.chat \
  --set anthropic_key="your-key" \
  --set openrouter_key="your-key" \
  --set supabaseUrl="https://your-project.supabase.co" \
  --set supabaseAnonKey="your-anon-key" \
  --set supabaseServiceKey="your-service-key"
```

## Secrets Management

API keys are mounted as files at `/etc/secrets/` (not environment variables). MindRoom reads paths from `*_API_KEY_FILE` environment variables:

```
env:
  - name: ANTHROPIC_API_KEY_FILE
    value: "/etc/secrets/anthropic_key"
  - name: OPENROUTER_API_KEY_FILE
    value: "/etc/secrets/openrouter_key"
```

## Ingress

Each instance gets three hosts:

- `{customer}.{baseDomain}` - MindRoom dashboard and API
- `{customer}.api.{baseDomain}` - Direct API access
- `{customer}.matrix.{baseDomain}` - Matrix/Synapse server

## Platform Deployment

```
# Create values file from example
cp cluster/k8s/platform/values-staging.example.yaml cluster/k8s/platform/values-staging.yaml
# Edit with your configuration

helm upgrade --install platform ./cluster/k8s/platform \
  -f ./cluster/k8s/platform/values-staging.yaml \
  --namespace mindroom-staging
```

The namespace must match `mindroom-{environment}`, where `environment` is the value set in your values file.

Platform ingress hosts:

- `app.{domain}` - Platform frontend
- `api.{domain}` - Platform backend API
- `webhooks.{domain}/stripe` - Stripe webhooks

## Local Development with Kind

```
just cluster-kind-fresh              # Start cluster with everything
just cluster-kind-port-frontend      # http://localhost:3000
just cluster-kind-port-backend       # http://localhost:8000
just cluster-kind-down               # Clean up
```

See `cluster/k8s/kind/README.md` for details.

## CLI Helper

```
./cluster/scripts/mindroom-cli.sh list              # List instances
./cluster/scripts/mindroom-cli.sh status            # Overall status
./cluster/scripts/mindroom-cli.sh logs <id>         # View logs
./cluster/scripts/mindroom-cli.sh provision <id>    # Create instance
./cluster/scripts/mindroom-cli.sh deprovision <id>  # Remove instance
./cluster/scripts/mindroom-cli.sh upgrade <id>      # Upgrade instance
```

The script reads its configuration from `saas-platform/.env`.

## Provisioner API

All endpoints require a bearer token (`PROVISIONER_API_KEY`).

| Endpoint                           | Method | Description                        |
| ---------------------------------- | ------ | ---------------------------------- |
| `/system/provision`                | POST   | Create or re-provision an instance |
| `/system/instances/{id}/start`     | POST   | Start a stopped instance           |
| `/system/instances/{id}/stop`      | POST   | Stop a running instance            |
| `/system/instances/{id}/restart`   | POST   | Restart an instance                |
| `/system/instances/{id}/uninstall` | DELETE | Remove an instance                 |
| `/system/sync-instances`           | POST   | Sync states between DB and K8s     |

Example provision request:

```
curl -X POST "https://api.mindroom.chat/system/provision" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $PROVISIONER_API_KEY" \
  -d '{"account_id": "uuid", "subscription_id": "sub-123", "tier": "starter"}'
```

The provisioner creates the namespace, generates URLs, deploys via Helm, and updates status in Supabase.

## Deployment Scripts

```
cd saas-platform
./deploy.sh platform-frontend          # Deploy platform frontend
./deploy.sh platform-backend           # Deploy platform backend
./redeploy-mindroom.sh                 # Redeploy all customer MindRoom instances
```

## Multi-Tenant Architecture

Each customer instance gets:

- Separate Kubernetes deployment in `mindroom-instances` namespace
- Isolated PersistentVolumeClaim for data
- Own Matrix/Synapse server (SQLite)
- Independent ConfigMap configuration
- Dedicated ingress routes

Platform services run in `mindroom-{environment}` namespace.

# CLI Reference

MindRoom provides a command-line interface for managing agents.

## Basic Usage

```
mindroom [OPTIONS] COMMAND [ARGS]...
```

## Commands

```
 Usage: root [OPTIONS] COMMAND [ARGS]...

 AI agents that live in Matrix and work everywhere via bridges.

 Quick start:
 mindroom config init   Create a starter config
 mindroom run           Start the system

╭─ Options ──────────────────────────────────────────────────────────────────────────────╮
│ --install-completion            Install completion for the current shell.              │
│ --show-completion               Show completion for the current shell, to copy it or   │
│                                 customize the installation.                            │
│ --help                -h        Show this message and exit.                            │
╰────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ─────────────────────────────────────────────────────────────────────────────╮
│ version             Show the current version of Mindroom.                              │
│ run                 Run the mindroom multi-agent system.                               │
│ doctor              Check your environment for common issues.                          │
│ connect             Pair this local MindRoom install with the hosted provisioning      │
│                     service.                                                           │
│ local-stack-setup   Start local Synapse + MindRoom Cinny using Docker only.            │
│ config              Manage MindRoom configuration files.                               │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```

## version

Show the current MindRoom version.

```
 Usage: root version [OPTIONS]

 Show the current version of Mindroom.


╭─ Options ──────────────────────────────────────────────────────────────────────────────╮
│ --help  -h        Show this message and exit.                                          │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```

## run

Start MindRoom with your configuration.

```
 Usage: root run [OPTIONS]

 Run the mindroom multi-agent system.

 This command starts the multi-agent bot system which automatically:
 - Creates all necessary user and agent accounts
 - Creates all rooms defined in config.yaml
 - Manages agent room memberships
 - Starts the bundled dashboard/API server (disable with --no-api)

╭─ Options ──────────────────────────────────────────────────────────────────────────────╮
│ --log-level     -l              TEXT     Set the logging level (DEBUG, INFO, WARNING,  │
│                                          ERROR)                                        │
│                                          [env var: LOG_LEVEL]                          │
│                                          [default: INFO]                               │
│ --storage-path  -s              PATH     Base directory for persistent MindRoom data   │
│                                          (state, sessions, tracking)                   │
│                                          [default: mindroom_data]                      │
│ --api               --no-api             Start the bundled dashboard/API server        │
│                                          alongside the bot                             │
│                                          [default: api]                                │
│ --api-port                      INTEGER  Port for the bundled dashboard/API server     │
│                                          [default: 8765]                               │
│ --api-host                      TEXT     Host for the bundled dashboard/API server     │
│                                          [default: 0.0.0.0]                            │
│ --help          -h                       Show this message and exit.                   │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```

## connect

Pair this local MindRoom install with a provisioning service.

The default provisioning URL is `https://mindroom.chat`; override it with `--provisioning-url` or the `MINDROOM_PROVISIONING_URL` environment variable.

```
mindroom connect --pair-code ABCD-EFGH
```

On success (default `--persist-env`), this writes to `.env` next to `config.yaml`:

- `MINDROOM_PROVISIONING_URL`
- `MINDROOM_LOCAL_CLIENT_ID`
- `MINDROOM_LOCAL_CLIENT_SECRET`

If your config still contains the owner placeholder token `__MINDROOM_OWNER_USER_ID_FROM_PAIRING__`, `connect` will auto-replace it when pairing returns a valid `owner_user_id`.

Use `--no-persist-env` if you want to export variables only for the current shell session.

```
mindroom connect --pair-code ABCD-EFGH --no-persist-env
```

Use `--provisioning-url` for non-default deployments:

```
mindroom connect \
  --pair-code ABCD-EFGH \
  --provisioning-url https://matrix.example.com
```

## local-stack-setup

Start local Synapse and the MindRoom Cinny client container for development.

By default this command also writes `MATRIX_HOMESERVER`, `MATRIX_SERVER_NAME`, and `MATRIX_SSL_VERIFY=false` into `.env` next to your active `config.yaml` so `mindroom run` works without inline env exports.

```
 Usage: root local-stack-setup [OPTIONS]

 Start local Synapse + MindRoom Cinny using Docker only.


╭─ Options ──────────────────────────────────────────────────────────────────────────────╮
│ --synapse-dir                                 PATH                 Directory           │
│                                                                    containing Synapse  │
│                                                                    docker-compose.yml  │
│                                                                    (from               │
│                                                                    mindroom-stack      │
│                                                                    settings).          │
│                                                                    [default:           │
│                                                                    local/matrix]       │
│ --homeserver-url                              TEXT                 Homeserver URL that │
│                                                                    Cinny and MindRoom  │
│                                                                    should use.         │
│                                                                    [default:           │
│                                                                    http://localhost:8… │
│ --server-name                                 TEXT                 Matrix server name  │
│                                                                    (default: inferred  │
│                                                                    from                │
│                                                                    --homeserver-url    │
│                                                                    hostname).          │
│                                                                    [default: None]     │
│ --cinny-port                                  INTEGER RANGE        Local host port for │
│                                               [1<=x<=65535]        the MindRoom Cinny  │
│                                                                    container.          │
│                                                                    [default: 8080]     │
│ --cinny-image                                 TEXT                 Docker image for    │
│                                                                    MindRoom Cinny.     │
│                                                                    [default:           │
│                                                                    ghcr.io/mindroom-a… │
│ --cinny-container-n…                          TEXT                 Container name for  │
│                                                                    MindRoom Cinny.     │
│                                                                    [default:           │
│                                                                    mindroom-cinny-loc… │
│ --skip-synapse                                                     Skip starting       │
│                                                                    Synapse (assume it  │
│                                                                    is already          │
│                                                                    running).           │
│ --persist-env             --no-persist-env                         Persist Matrix      │
│                                                                    local dev settings  │
│                                                                    to .env next to     │
│                                                                    config.yaml.        │
│                                                                    [default:           │
│                                                                    persist-env]        │
│ --help                -h                                           Show this message   │
│                                                                    and exit.           │
╰────────────────────────────────────────────────────────────────────────────────────────╯
```

## Examples

### Basic run

```
mindroom run
```

### Debug logging

```
mindroom run --log-level DEBUG
```

### Custom storage path

```
mindroom run --storage-path /data/mindroom
```

### Pair local install with hosted provisioning

```
mindroom connect --pair-code ABCD-EFGH
```

### Start local Synapse + Cinny (default local setup)

```
mindroom local-stack-setup --synapse-dir /path/to/mindroom-stack/local/matrix
```

### Start local stack without writing `.env`

```
mindroom local-stack-setup --no-persist-env
```

### Show version

```
mindroom version
```
