Metadata-Version: 2.4
Name: vicode
Version: 0.2.2
Summary: TUI coding agent for local LLMs with supervisor system
Author: Endika
License: MIT
Project-URL: Homepage, https://github.com/endika/vicode
Project-URL: Repository, https://github.com/endika/vicode
Keywords: ai,agent,tui,llm,coding,cli
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai>=2.0.0
Requires-Dist: textual>=3.0.0
Requires-Dist: ddgs>=8.0.0
Requires-Dist: engram-core>=0.3.0
Dynamic: license-file

# vicode

TUI coding agent for local LLMs. Like Claude Code, but it runs on your own hardware.

## Install

```bash
pip install vicode
```

## Quick Start

```bash
# Start llama-server with your model
llama-server -m model.gguf --port 8080 --jinja

# Launch vicode in any project directory
cd my-project
vicode
```

On first run, vicode auto-detects your local LLM server. If none is found, an interactive setup wizard configures your provider (OpenAI, Anthropic, OpenRouter, Groq, or custom endpoint).
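
As a sketch, the same flow with Ollama instead of llama-server (the model name below is illustrative; substitute your own):

```bash
# Ollama exposes an OpenAI-compatible API on port 11434 by default;
# start it if it is not already running as a background service
ollama serve &

# pull any local coding model (example name only)
ollama pull qwen2.5-coder

# vicode should detect the running server on first launch
cd my-project
vicode
```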

## Features

- **13 tools**: bash, file read/write/edit, grep, glob, ls, web search/fetch, memory, undo, explore, plan
- **Supervisor system**: A safety guardian validates actions against plans; a memory guardian auto-stores learnings; a task tracker shows progress
- **Persistent memory**: Remembers preferences, error fixes, and patterns across sessions (powered by engram)
- **Sub-agents**: Explore and Plan run with isolated context to avoid polluting the main conversation
- **Slash commands**: `/help`, `/clear`, `/compact`, `/init`, `/undo`, `/model`, `/provider`, `/status`, `/memory`
- **Custom commands**: Drop `.md` files in `.vicode/commands/` for project-specific shortcuts (see the sketch after this list)
- **Provider support**: Local (llama-server, Ollama, LM Studio), OpenAI, Anthropic, OpenRouter, Groq
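
A minimal custom-command sketch, assuming (as in similar tools) that the filename becomes the slash command name:

```bash
# add a project-specific /review command
# (the filename-to-command mapping here is an assumption)
mkdir -p .vicode/commands
cat > .vicode/commands/review.md <<'EOF'
Review the staged changes for bugs, missing tests, and style issues.
Summarize findings as a checklist before suggesting edits.
EOF
```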

## Configuration

- `VICODE.md` / `CLAUDE.md` / `AGENT.md` in the project root are loaded into the system prompt (see the sketch after this list)
- `~/.vicode/config.json` stores provider settings
- Run `vicode --setup` to reconfigure your provider
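
These instruction files are freeform; purely as an illustration, a small `VICODE.md` might look like:

```bash
# project-level instructions that vicode loads into the system prompt
# (contents are illustrative; write whatever fits your project)
cat > VICODE.md <<'EOF'
# Notes for the agent
- Run tests with `pytest -q` before declaring a task done
- Prefer small, focused diffs over sweeping rewrites
EOF
```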

## Requirements

- Python 3.11+
- A running LLM server (llama-server, Ollama) or an API key for a cloud provider
