Metadata-Version: 2.4
Name: nanocoder
Version: 0.2.1
Summary: A multi-provider coding agent with Textual TUI (supports Gemini, OpenAI, Anthropic)
Project-URL: Homepage, https://github.com/yuxiang-wu/nanocoder
Project-URL: Repository, https://github.com/yuxiang-wu/nanocoder
Author: Yuxiang Wu
License-Expression: MIT
License-File: LICENSE
Keywords: agent,ai,anthropic,coding-assistant,gemini,llm,openai,textual,tui
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development
Classifier: Topic :: Utilities
Requires-Python: >=3.13
Requires-Dist: anthropic>=0.40.0
Requires-Dist: google-genai>=1.54.0
Requires-Dist: openai>=1.60.0
Requires-Dist: rich>=14.0.0
Requires-Dist: textual>=0.90.0
Description-Content-Type: text/markdown

# 🤖 Nanocoder

A minimal, multi-provider coding agent with a beautiful Textual TUI. Supports **Google Gemini**, **OpenAI**, and **Anthropic Claude** via provider-native APIs.

![Python 3.13+](https://img.shields.io/badge/python-3.13+-blue.svg)
![License MIT](https://img.shields.io/badge/license-MIT-green.svg)

## ✨ Features

- **Multi-Provider Support**: Switch between Gemini, OpenAI, and Anthropic with a single environment variable
- **Unified Agent Loop**: Same UX across all providers
- **Streaming Responses**: Real-time text and thought/reasoning display
- **Parallel Tool Execution**: Concurrent file operations and commands
- **Trajectory Logging**: JSONL traces for debugging, reproducibility, and training data
- **Beautiful TUI**: Modern terminal interface with Textual

## 🧭 Overview

Nanocoder pairs a Textual front-end with a provider-agnostic agent core so the UX stays identical whether you call Gemini, OpenAI, or Anthropic. The codebase is intentionally compact and organized around a few focused modules:

- **CLI + TUI (`nanocoder/cli.py`, `nanocoder/app_tui.py`)** – boots the Textual application, renders streaming text/thought panes, and wires up keyboard shortcuts plus chat commands like `/trace`.
- **Agent core (`nanocoder/agent/core.py`)** – streams `LLMSession` events, manages multi-iteration tool calling, and records trajectories for every turn.
- **Tooling layer (`nanocoder/agent/tools.py`, `nanocoder/agent/exec.py`)** – exposes filesystem/shell helpers with JSON Schema contracts and executes them in parallel with callback hooks for the UI.
- **LLM adapters (`nanocoder/llm/*.py`)** – wrap provider-native SDKs while converting their streaming outputs into the unified event model.
- **Tracing (`nanocoder/tracing/*`)** – produces JSONL traces in `.nanocoder_traces/` for debugging and reproducibility.
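The unified event model the adapters target can be pictured as a small set of tagged event types that the UI consumes without knowing which provider produced them. The names below are illustrative, not the actual types in `nanocoder/llm/base.py`:

```python
from dataclasses import dataclass

# Hypothetical event types; the real definitions live in
# nanocoder/llm/base.py and may differ in shape and naming.

@dataclass
class TextDelta:
    text: str

@dataclass
class ThoughtDelta:
    text: str

@dataclass
class ToolCall:
    name: str
    arguments: dict

def dispatch(events):
    """Route a stream of adapter events to UI panes, provider-agnostically."""
    rendered = []
    for event in events:
        if isinstance(event, TextDelta):
            rendered.append(("chat", event.text))
        elif isinstance(event, ThoughtDelta):
            rendered.append(("thoughts", event.text))
        elif isinstance(event, ToolCall):
            rendered.append(("tools", f"{event.name} {event.arguments}"))
    return rendered
```

Because the agent core only sees events like these, each adapter keeps its provider's native conversation state internally and never has to translate messages into another provider's format.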

## 🚀 Quick Start

### Installation

```bash
# Using uv (recommended)
uv tool install nanocoder

# Or clone and install
git clone https://github.com/yuxiang-wu/nanocoder
cd nanocoder
uv sync
```

### Set Up an API Key


```bash
# For Gemini (default)
export GEMINI_API_KEY="your-key"

# Or for OpenAI
export OPENAI_API_KEY="your-key"
export NANOCODER_PROVIDER="openai"

# Or for Anthropic
export ANTHROPIC_API_KEY="your-key"
export NANOCODER_PROVIDER="anthropic"
```

### Run

```bash
nanocoder
```

### Repo at a Glance

| Area | What lives there |
|------|------------------|
| `nanocoder/__init__.py` | Package init, version (single source of truth) |
| `nanocoder/app_tui.py` | Textual widgets, streaming UI, keyboard bindings |
| `nanocoder/agent/` | Provider-agnostic loop, tool registry, parallel executor |
| `nanocoder/llm/` | Adapters for Gemini, OpenAI Responses, Anthropic Messages |
| `nanocoder/tracing/` | JSONL schema + logger used for `.nanocoder_traces/` |

## 🛠️ Available Tools

| Tool | Description |
|------|-------------|
| `read_file` | Read file contents with optional line ranges |
| `edit_file` | Create or edit files using search & replace |
| `run_command` | Execute shell commands with persistent working directory |
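Each tool is registered with a JSON Schema contract that the provider adapters translate into their native function-calling format. The schema below is a plausible sketch of `read_file`'s contract; the actual definitions live in `nanocoder/agent/tools.py` and may differ:

```python
# Illustrative JSON Schema contract for read_file; field names and
# descriptions are assumptions, not the package's actual schema.
READ_FILE_SCHEMA = {
    "name": "read_file",
    "description": "Read file contents, optionally limited to a line range.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path to the file."},
            "start_line": {"type": "integer", "description": "1-based first line (optional)."},
            "end_line": {"type": "integer", "description": "1-based last line (optional)."},
        },
        "required": ["path"],
    },
}
```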

## ⚙️ Configuration

All configuration is via environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `NANOCODER_PROVIDER` | `gemini` | Provider: `gemini`, `openai`, `anthropic` |
| `NANOCODER_MODEL` | (per provider) | Model identifier (see defaults below) |
| `NANOCODER_SHOW_THOUGHTS` | `1` | Show thought panel (0 or 1) |
| `NANOCODER_TRACE_DIR` | `.nanocoder_traces` | Trace output directory |
| `NANOCODER_MAX_TOOL_WORKERS` | `10` | Max concurrent tool executions |
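The defaults in the table above could be resolved at startup roughly as follows. This is only a sketch: the variable names match the table, but the function name and return shape are assumptions, not nanocoder's actual config code.

```python
import os

# Hypothetical config loader mirroring the table of environment
# variables; nanocoder's real resolution logic may differ.
def load_config() -> dict:
    return {
        "provider": os.environ.get("NANOCODER_PROVIDER", "gemini"),
        "model": os.environ.get("NANOCODER_MODEL"),  # None -> provider default
        "show_thoughts": os.environ.get("NANOCODER_SHOW_THOUGHTS", "1") == "1",
        "trace_dir": os.environ.get("NANOCODER_TRACE_DIR", ".nanocoder_traces"),
        "max_tool_workers": int(os.environ.get("NANOCODER_MAX_TOOL_WORKERS", "10")),
    }
```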

### Provider-Specific Keys

| Provider | API Key Variable |
|----------|------------------|
| Gemini | `GEMINI_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |

### Default Models

| Provider | Default Model | Thinking/Reasoning |
|----------|---------------|-------------------|
| Gemini | `gemini-3-pro-preview` | Thought summaries (high effort) |
| OpenAI | `gpt-5.1-codex` | Detailed reasoning summaries |
| Anthropic | `claude-opus-4-5-20251101` | Extended thinking (10k token budget) |

All providers surface the model's internal reasoning/thinking in the UI's thought panel.

## ⌨️ Keyboard Shortcuts

| Key | Action |
|-----|--------|
| `Ctrl+C` | Quit |
| `Ctrl+L` | Clear chat history |
| `Escape` | Focus input |

## 💬 Commands

| Command | Description |
|---------|-------------|
| `/quit` | Exit the application |
| `/clear` | Clear chat history |
| `/help` | Show help information |
| `/provider` | Show current provider and model |
| `/trace` | Show trace file path |

## 📊 Trajectory Logging

Every session creates a JSONL trace file for debugging and reproducibility:

```
.nanocoder_traces/20251227_143022_a1b2c3d4_gemini.jsonl
```

### Trace Contents

Each trace includes:
- **Machine metadata**: hostname, platform, Python version, Nanocoder version
- **Model responses**: full text, reasoning/thinking, start/end timestamps, token usage
- **Tool calls**: arguments and results
- **Timing**: start/end timestamps detailed enough to replay a session without re-streaming

Trace events: `run.start`, `run.end`, `turn.start`, `turn.end`, `model.request`, `model.response`, `tool.start`, `tool.end`, `error`
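Because traces are plain JSONL, they are easy to analyze after the fact. The sketch below pairs `tool.start`/`tool.end` events to compute per-tool wall time; the event names follow the list above, but the field names (`type`, `tool_id`, `tool`, `ts`) are assumptions about the schema in `nanocoder/tracing/schema.py`:

```python
import json

# Post-hoc trace analysis sketch; field names are assumed, not taken
# from nanocoder's actual trace schema.
def tool_durations(trace_path: str) -> dict:
    """Pair tool.start/tool.end events and report per-tool wall time."""
    starts, durations = {}, {}
    with open(trace_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("type") == "tool.start":
                starts[event["tool_id"]] = event["ts"]
            elif event.get("type") == "tool.end":
                began = starts.pop(event["tool_id"], None)
                if began is not None:
                    durations.setdefault(event["tool"], []).append(event["ts"] - began)
    return durations
```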

## 🏗️ Architecture

```
nanocoder/
├── __init__.py     # Package init + version
├── cli.py          # Entry point
├── app_tui.py      # Textual TUI
├── agent/
│   ├── core.py     # Provider-agnostic agent loop
│   ├── tools.py    # Tool definitions + registry
│   └── exec.py     # Parallel tool execution
├── llm/
│   ├── base.py     # Event types & LLMSession protocol
│   ├── gemini.py   # Gemini adapter (google.genai)
│   ├── openai.py   # OpenAI adapter (Responses API)
│   └── anthropic.py # Anthropic adapter (Messages API)
└── tracing/
    ├── logger.py   # TrajectoryLogger + JSONL output
    └── schema.py   # Trace event dataclasses
```
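The parallel tool executor in `agent/exec.py` can be sketched as a thread pool with completion callbacks that feed the UI as each tool finishes. The function name, call shape, and callback signature below are illustrative, not the package's actual API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Illustrative parallel executor; nanocoder/agent/exec.py may differ
# in naming, call shape, and callback hooks.
def run_tools(calls, registry, on_done, max_workers=10):
    """Run tool calls concurrently; invoke on_done(call, result) as each finishes."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(registry[c["name"]], **c["args"]): c for c in calls}
        for future in as_completed(futures):
            call = futures[future]
            result = future.result()
            on_done(call, result)          # e.g. update the TUI's tool pane
            results[call["id"]] = result
    return results
```

Completion callbacks rather than a single blocking join let the TUI show each tool's result the moment it lands, which matters when one slow shell command runs alongside several fast file reads.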

### Design Principles

1. **Unify on events, not messages**: Provider adapters maintain native conversation state
2. **Provider correctness first**: No cross-provider message normalization
3. **Trace everything**: Consistent event schema for debugging, reproducibility, and training
4. **Minimal surface area**: Thin adapters, provider-agnostic core loop

## 📝 License

MIT License - see [LICENSE](LICENSE) for details.

## 🙏 Acknowledgments

Built with:
- [google-genai](https://github.com/googleapis/python-genai) - Gemini API
- [openai](https://github.com/openai/openai-python) - OpenAI API
- [anthropic](https://github.com/anthropics/anthropic-sdk-python) - Anthropic API
- [Textual](https://github.com/Textualize/textual) - Terminal UI framework
- [Rich](https://github.com/Textualize/rich) - Rich text formatting
