Metadata-Version: 2.4
Name: ai-input-output
Version: 0.1.1
Summary: AI-powered pipe-friendly terminal command assistant using litellm + OpenRouter.
Author: Gabriel Huang
License: MIT
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: litellm>=1.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: pydantic>=2.0.0

# ai-input-output - AI Terminal Assistant

An intelligent terminal command assistant powered by LiteLLM and OpenRouter. Get instant help with shell commands, log analysis, and file operations directly in your terminal.

## Features

- 🤖 **Smart Command Generation**: Get exactly the commands you need, with explanations
- 📊 **Log Analysis**: Pipe logs, errors, or any text for instant analysis
- 🔄 **Interactive Mode**: Execute suggested commands with a single keypress
- 📁 **File Support**: Analyze files directly with `-f` flag
- 🎯 **Minimal & Focused**: Generates one command by default, up to 3 for complex tasks
- 🔒 **Output Truncation**: Automatically manages context to prevent token overflow

## Installation

```bash
# Install from PyPI
pip install ai-input-output

# Or install from source
cd /path/to/ai-input-output
pip install -e .
```

**Initialize configuration (recommended):**
```bash
ai init
# Creates ~/.ai-input-output with defaults
```

## Configuration

### Quick Start

```bash
# 1. Initialize config file (creates ~/.ai-input-output)
ai init

# 2. Edit the config file to set your preferred model
# Examples:
#   model: ollama/llama3.2              # Local Ollama
#   model: openai/gpt-4                 # OpenAI
#   model: anthropic/claude-3-5-sonnet  # Anthropic
```

### Configuration Priority

Settings are loaded in this order (highest priority first):

1. **CLI arguments**: `ai -m ollama/mistral`
2. **Environment variables**: `export AIO_MODEL_NAME="ollama/llama3.2"`
3. **Config file**: `~/.ai-input-output`
4. **Defaults**: Built-in defaults from Pydantic model
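For example, if both the environment variable and the CLI flag are set, the flag wins for that invocation (model names below are illustrative):

```bash
export AIO_MODEL_NAME="ollama/llama3.2"   # env var provides the default model
ai -m ollama/mistral "your question"      # -m overrides it for this run
ai "your question"                        # falls back to the env var
```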

### Ollama Setup

**Install Ollama:**
```bash
# macOS/Linux
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.2
ollama pull mistral
```

**Configure ai to use Ollama:**
```bash
# Option 1: Use config file (recommended)
ai init
# Then edit ~/.ai-input-output:
#   model: ollama/llama3.2

# Option 2: Environment variable
export AIO_MODEL_NAME="ollama/llama3.2"

# Option 3: CLI argument
ai -m ollama/mistral "your question"
```
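Before pointing `ai` at Ollama, it can help to confirm the server is up and the model is installed. These are standard Ollama commands, independent of this tool:

```bash
ollama list                      # show locally installed models
curl -s http://localhost:11434   # default API endpoint; replies "Ollama is running"
```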

### Other Providers

Configure API keys according to your LLM provider. See [LiteLLM documentation](https://docs.litellm.ai/docs/providers) for provider-specific setup.

**OpenRouter:**
```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
# In ~/.ai-input-output: model: openrouter/anthropic/claude-sonnet-4.5
```

**OpenAI:**
```bash
export OPENAI_API_KEY="sk-..."
# In ~/.ai-input-output: model: openai/gpt-4
```

**Anthropic:**
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
# In ~/.ai-input-output: model: anthropic/claude-3-5-sonnet-20241022
```

### Reasoning Models

Some models support extended thinking/reasoning (e.g., DeepSeek, Qwen). You can control the reasoning effort:

```bash
# In ~/.ai-input-output:
#   reasoning_effort: medium  # Options: low, medium, high

# Or via environment variable
export AIO_REASONING_EFFORT="high"

# Or via CLI
ai -r high "complex problem requiring deep thinking"
```

**Reasoning display:**
- Thinking/reasoning content appears in **dim gray** as it streams
- Regular response content appears in normal color
- Helps distinguish internal reasoning from final answers

**Supported models:**
- `deepseek/deepseek-reasoner`
- `openrouter/qwen/qwq-32b-preview`
- Any model with reasoning capabilities via LiteLLM
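To preview what a reasoning request would send, without making an API call, combine the `-d`/`--dry-run` flag (documented under Options below) with `-m` and `-r`:

```bash
# Shows the API call parameters only; nothing is sent or executed
ai -d -m deepseek/deepseek-reasoner -r high "complex problem requiring deep thinking"
```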

### Config File

**Location:** `~/.ai-input-output` (YAML format)

**Commands:**
- `ai init` - Create config file with defaults
- `ai init --force` - Overwrite existing config file

**Available settings:** See `config.example.yaml` in the repository or run `ai init` to see defaults.
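As a sketch, a minimal config might look like the following; the key names are assumed from the settings referenced elsewhere in this README (`model`, `reasoning_effort`, `max_lines`, `max_chars`), so treat `ai init` as the authoritative source:

```bash
# Write a minimal ~/.ai-input-output (YAML keys assumed from the docs above)
cat > ~/.ai-input-output <<'EOF'
model: ollama/llama3.2
reasoning_effort: medium
max_lines: 100
max_chars: 4000
EOF
```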

## Usage

### Basic Usage

```bash
# Interactive mode - generates commands for actionable tasks
ai "give me the biggest file in the current directory"

# With file input (stays interactive)
ai -f error.log "extract and count unique errors"

# Piped input (auto non-interactive) - provides direct analysis
cat error.log | ai "explain these errors to me"

# No question - auto-summarizes
ai -f config.yaml
```

**Key difference:** Piped input automatically switches to non-interactive mode, and the tool exits after responding. Use `-f` instead to keep an interactive session for follow-up questions.

### Interactive Mode

**Safety:** Commands are NEVER auto-executed. You must explicitly execute them.

**Three ways to execute commands:**

1. **AI-suggested commands** (type `1`, `2`, or `3`):
```bash
ai "find python files modified today"
# AI responds with numbered commands:
#   1: find . -name "*.py" -mtime -1

> 1  # Type number to execute
# Output shown to you and sent back to AI for analysis
```

2. **Direct shell commands** (prefix with `!`):
```bash
> !ls -la *.py
# Executes immediately, output sent to AI for context

> !pwd
# Execute any shell command directly
```

3. **LLM instructions** (everything else):
```bash
> explain the last output
# Natural language sent to AI for processing

> what's the largest file?
# AI will generate new commands based on context
```

**Other commands:**
- `/reset` - Clear conversation history
- `Ctrl+C` twice (within 2 seconds) - Exit

### Options

- `-m, --model MODEL`: Specify model (overrides config/env)
- `-f, --file FILE`: Read input from file (preferred over piping - keeps interactive mode)
- `-n, --lines N`: Max lines for truncation (applies to input buffers and command outputs)
- `-c, --chars N`: Max chars for truncation (applies to input buffers and command outputs)
- `-r, --reasoning-effort LEVEL`: Reasoning effort: low, medium, high (for reasoning models)
- `-e, --exec`: Force exec mode (one-shot, non-interactive)
- `-d, --dry-run`: Show API call parameters without executing
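Flags combine freely. For example, raising the truncation limits while forcing a specific model:

```bash
# Analyze a large log with a bigger context budget and an explicit model
ai -m openai/gpt-4 -n 500 -c 20000 -f app.log "summarize the recurring errors"
```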

## Examples

### Quick Reference: Command Types

| Input | Action |
|-------|--------|
| `1`, `2`, `3` | Execute AI-suggested commands |
| `!ls -la` | Execute shell command directly |
| `find large files` | Send to AI (generates commands) |
| `/reset` | Clear conversation history |

### Tasks → Commands Generated

```bash
# System tasks - generates executable commands
ai "give me the biggest file in the current directory"
ai "find all python files modified today"
ai "show me disk usage by directory"

# File operations - generates processing commands
ai "find and compress all jpg files larger than 10MB from last week"
ai -f data.json "pretty print this json"

# Text processing - generates transformation commands
cat data.csv | ai "convert to JSON format"
ai -f error.log "extract and count unique errors"
```

### Analysis → Direct Answers

```bash
# Explains provided content
cat error.log | ai "explain these errors to me"
git diff | ai "review this change"
docker logs app | ai "why did this crash?"

# Summarizes when no question provided
ai -f /var/log/app.log
cat config.yaml | ai

# Conceptual questions
ai "what is a race condition?"
ai "why might my application be slow?"
```

## How It Works

1. **Input**: Text via stdin (piped), file (`-f`), or direct instruction
2. **Mode**: Piping auto-triggers non-interactive; `-f` stays interactive
3. **Smart Response**:
   - **Actionable tasks** (find, show, convert) → Generates bash commands
   - **Analysis requests** on provided content → Direct answers
   - **No question** → Summarizes content
4. **Execution**: You manually run commands by typing their number (interactive mode only)
5. **Truncation**: Inputs/outputs truncated for token efficiency (configurable via `-n`/`-c`)
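Putting these steps together, a typical piped invocation is non-interactive and caps what the LLM receives:

```bash
# Piped input auto-triggers non-interactive mode; -n/-c limit LLM-bound text
dmesg | ai -n 200 -c 8000 "any hardware errors in here?"
```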

## Buffer Management

- **Unified truncation**: The same `max_lines` and `max_chars` limits apply to both:
  - Input buffers (stdin/file content)
  - Command output sent back to LLM
- **Defaults**: 100 lines / 4000 chars (configurable via `-n`/`-c` or config file)
- **What you see**: Full output is displayed to you
- **What LLM sees**: Truncated version to manage token usage
- **Tip**: Increase limits in config file for working with larger outputs
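As a rough shell analogy for what LLM-bound text goes through (assumed semantics: line cap first, then character cap; the tool's actual truncation logic may differ):

```bash
# Conceptual illustration only; not how ai works internally
some_command | head -n 100 | head -c 4000
```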

## Command Format

```bash
# Explanation of what this does
command --with arguments
```

Commands include explanations and are executed exactly as shown when you type their number.

## Best Practices

- **Be specific**: "find large files" → "find files larger than 100MB in /var"
- **Use `-f` over piping**: Keeps interactive mode for follow-up questions
- **Action verbs for commands**: "show", "find", "convert" → generates commands
- **Analysis language for explanations**: "explain", "why", "summarize" → direct answers
- **Review before executing**: Commands require your confirmation
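For instance, the same request phrased vaguely versus specifically:

```bash
ai "find large files"                       # AI must guess scope and threshold
ai "find files larger than 100MB in /var"   # scoped and directly actionable
```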

## Troubleshooting

**Commands not generated?** Use action verbs: "give me", "show me", "find", "convert"

**Output too long?** Adjust with `-n` and `-c` flags, or pipe through `head`/`tail`

**API errors?** Check that your API key is set and has credits; try a different model with `-m`
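For example, pre-trimming a long log before it reaches `ai`:

```bash
# Only the last 200 lines are sent for analysis
docker logs app | tail -n 200 | ai "why did this crash?"
```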

## License

MIT

## Contributing

Contributions welcome! Feel free to open issues or submit pull requests.
