Cognithor · Agent OS

Local-first, autonomous agent operating system for personal AI assistance.

Cognition + Thor — Intelligence with Power

v0.35.5 — A2A Delegation, Sandbox Enforcement, Proportional Iteration Caps, 0 Lint Errors
16 LLM Providers · 17 Channels · 53 MCP Tools · 10,208 Tests · 89% Coverage

Features

🧠 16 LLM Providers

Ollama, LM Studio (both local), OpenAI, Anthropic, Gemini, Groq, DeepSeek, Mistral, Together, OpenRouter, xAI, Cerebras, GitHub Models, AWS Bedrock, Hugging Face, Moonshot. Auto-detects the provider from your API key.

📡 17 Channels

CLI, Web UI, REST API, Telegram (poll + webhook), Discord, Slack, WhatsApp, Signal, iMessage, Teams, Matrix, Google Chat, Mattermost, Feishu, IRC, Twitch, Voice (STT/TTS with wake word).

💾 5-Tier Cognitive Memory

Core identity, episodic logs, semantic knowledge graph, procedural skills, working memory. 3-channel hybrid search: BM25 + vector embeddings + graph traversal.
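
As a rough illustration of how 3-channel hybrid search can fuse its signals, here is a minimal score-fusion sketch. The function name, weights, and score layout are invented for this example and do not reflect Cognithor's actual retrieval API:

```python
# Hypothetical 3-channel score fusion: BM25 + vector similarity + graph
# traversal, each pre-normalized to [0, 1]. Illustrative only.

def hybrid_rank(query_results, weights=(0.4, 0.4, 0.2)):
    """Rank doc ids by a weighted sum of the three channel scores.

    query_results: dict mapping doc_id -> (bm25, vector, graph).
    """
    w_bm25, w_vec, w_graph = weights
    scored = {
        doc_id: w_bm25 * bm25 + w_vec * vec + w_graph * graph
        for doc_id, (bm25, vec, graph) in query_results.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

results = {
    "note-1": (0.9, 0.2, 0.0),  # strong keyword match only
    "note-2": (0.3, 0.8, 0.5),  # semantically close and graph-linked
}
print(hybrid_rank(results))  # → ['note-2', 'note-1']
```

The point of the fusion step is that a purely lexical hit ("note-1") can be outranked by a document that is both semantically similar and connected in the knowledge graph.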

🛡️ Security & Audit

Deterministic Gatekeeper (4 risk levels), 4-level sandbox, SHA-256 audit chain, Fernet token encryption, API rate limiting, CORS hardening, XSS prevention. An 80-item security audit with 0 open findings.
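
The idea behind a SHA-256 audit chain is that every log entry commits to the hash of the entry before it, so any retroactive edit invalidates the rest of the chain. A minimal sketch, assuming a JSON payload per entry (the field names and on-disk format here are invented, not Cognithor's actual schema):

```python
# Hash-chained audit log sketch: each entry's hash covers its event
# plus the previous entry's hash. Illustrative only.
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def _digest(event, prev_hash):
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(chain, event):
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    chain.append({"event": event, "prev": prev_hash,
                  "hash": _digest(event, prev_hash)})
    return chain

def verify(chain):
    prev_hash = GENESIS
    for entry in chain:
        if entry["prev"] != prev_hash or entry["hash"] != _digest(entry["event"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "shell.exec: ls")
append_entry(log, "fs.read: notes.md")
assert verify(log)
log[0]["event"] = "tampered"  # any edit breaks every later link
assert not verify(log)
```

Because each hash folds in the previous one, verifying the final entry is enough to detect tampering anywhere earlier in the log.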

🎛️ React Control Center

Full dashboard (React 19 + Vite 7) with integrated chat, voice mode, live config editing, agent management, prompt editing, cron jobs, MCP servers, A2A settings. Pre-built UI ships with the installer.

🔌 53 MCP Tools & A2A

10 tool modules: filesystem, shell, memory, web, browser, media, vault, synthesis, code, skills. A2A protocol (Linux Foundation RC v1.0) with Planner delegation — agents can discover and delegate tasks to remote agents.

⚙️ PGE Architecture

Planner (LLM) → Gatekeeper (deterministic policy) → Executor (DAG-based parallel sandbox). No hallucinated tool calls — every action is policy-checked before execution.
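
Conceptually, the PGE flow can be sketched as a loop where the LLM proposes, a deterministic gate checks, and only approved actions reach the sandbox. All names and the policy table below are hypothetical; the real components are far richer:

```python
# Planner -> Gatekeeper -> Executor sketch (illustrative only).

ALLOWED_TOOLS = {"fs.read", "web.search"}  # assumed policy table
BLOCKED_TOOLS = {"shell.exec"}

def gatekeeper(action):
    """Deterministic policy check: no LLM is involved at this stage."""
    if action["tool"] in BLOCKED_TOOLS:
        return False, "blocked by policy"
    if action["tool"] not in ALLOWED_TOOLS:
        return False, "unknown tool (possible hallucinated call)"
    return True, "ok"

def run(plan):
    results = []
    for action in plan:                  # plan = Planner (LLM) output
        ok, reason = gatekeeper(action)  # deterministic gate
        if not ok:
            results.append((action["tool"], reason))
            continue
        results.append((action["tool"], "executed in sandbox"))  # Executor
    return results

plan = [{"tool": "fs.read"}, {"tool": "shell.exec"}, {"tool": "browser.open"}]
for tool, outcome in run(plan):
    print(f"{tool}: {outcome}")
```

The key property is that a hallucinated tool name never reaches the Executor: the gate rejects anything outside the known policy table before execution.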

🔊 Voice & Chat

Wake word ("Jarvis") with Levenshtein + phonetic matching, conversation mode, Piper TTS (Thorsten Emotional), Whisper STT, integrated chat with WebSocket streaming.
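
Levenshtein-based wake-word matching tolerates small transcription errors from the STT stage. A minimal sketch (the threshold and function names are invented; the real engine also applies phonetic matching):

```python
# Fuzzy wake-word match via Levenshtein edit distance. Illustrative only.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_wake_word(heard: str, wake: str = "jarvis", max_dist: int = 2) -> bool:
    return levenshtein(heard.lower(), wake) <= max_dist

assert is_wake_word("Jarvis")
assert is_wake_word("jarvus")     # one substitution away still triggers
assert not is_wake_word("computer")
```

Tolerating an edit distance of 1-2 catches common mishearings ("Jarvus", "Jarviss") without firing on unrelated words.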

📚 Knowledge Vault

Obsidian-compatible Markdown vault with YAML frontmatter, tags, backlinks, full-text search. Knowledge synthesis with confidence ratings, contradiction detection, and gap analysis.

🔐 100% Local-First

All data stays on your machine. No cloud required — Ollama runs fully local. Optional cloud LLM providers available. Full GDPR compliance toolkit.

🤖 Human Feel

Personality Engine (warmth, humor, greetings), sentiment detection (frustrated/urgent/confused/positive), user preference learning, real-time status callbacks, multilingual error messages.

📖 Procedural Learning

Reflector auto-synthesizes reusable skills from successful sessions. Learned procedures are suggested for future similar requests. Skill marketplace with ratings and remote registry.

🏪 Community Marketplace

Install, search, rate, and report community skills from a GitHub-hosted registry. Publisher verification with 4 trust levels. 5-check validation pipeline. ToolEnforcer runtime sandboxing.

🌍 i18n — 3 Languages

Full internationalization with English, German, and Chinese (Simplified). 244 translated keys per locale, SHA-256 integrity verification, curated Planner system prompts. Add new languages with a single JSON file.
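
A single-file locale might look like the sketch below. The key names, `_meta` block, and filename convention are invented for illustration; the real locale files carry 244 keys each:

```json
{
  "_meta": { "locale": "fr", "name": "Français" },
  "greeting.morning": "Bonjour !",
  "status.thinking": "Réflexion en cours…",
  "error.tool_blocked": "Cet outil a été bloqué par la politique de sécurité."
}
```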

Architecture

┌───────────────────────────────────────────────────────────────┐
│             Control Center UI (React 19 + Vite 7)             │
│  Config · Agents · Chat · Voice · Prompts · Cron · MCP · A2A  │
├───────────────────────────────────────────────────────────────┤
│          REST API (FastAPI, 20+ endpoints, port 8741)         │
├───────────────────────────────────────────────────────────────┤
│                         Channels (17)                         │
│   CLI · Web · Telegram · Discord · Slack · WhatsApp · Voice   │
├───────────────────────────────────────────────────────────────┤
│                         Gateway Layer                         │
│     Session · Agent Loop · Personality · Sentiment · Prefs    │
├───────────────────────────────────────────────────────────────┤
│          Context Pipeline (Memory · Vault · Episodes)         │
├───────────────┬───────────────┬───────────────────────────────┤
│    Planner    │  Gatekeeper   │    Executor (DAG parallel)    │
│     (LLM)     │   (Policy)    │    (Sandbox)                  │
├───────────────┴───────────────┴───────────────────────────────┤
│             MCP Tool Layer (53 tools, 10 modules)             │
│      Filesystem · Shell · Memory · Web · Browser · Media      │
│         Vault · Synthesis · Code · Skills Marketplace         │
├───────────────────────────────────────────────────────────────┤
│                 Multi-LLM Backend Layer (16)                  │
├───────────────────────────────────────────────────────────────┤
│                    5-Tier Cognitive Memory                    │
│       Core · Episodic · Semantic · Procedural · Working       │
└───────────────────────────────────────────────────────────────┘

Quick Start

From clone to running agent in under 5 minutes.

# Option A: One-Click (Windows) — no Node.js needed
# Double-click start_cognithor.bat -> Browser opens -> Power On -> Done.
# Auto-installs Python + Ollama via winget if missing.

# Option B: Manual install
git clone https://github.com/Alex8791-cyber/cognithor.git
cd cognithor
pip install -e ".[all,dev]"

# Pull Ollama models
ollama pull qwen3:32b       # Planner (20 GB VRAM)
ollama pull qwen3:8b        # Executor (6 GB VRAM)
ollama pull nomic-embed-text # Embeddings

# Start
cognithor                    # Interactive CLI
cognithor --lite             # Lite mode (6 GB VRAM total)

No GPU? Use cognithor --lite or set a cloud API key. See LLM Providers for all 16 options.