Metadata-Version: 2.4
Name: alfie-cli
Version: 0.2.0
Summary: Artificial Lifeform Intelligent Entity — terminal-native AI agent orchestrator
Author: P-Typed Research Labs
License: MIT
License-File: LICENSE
Keywords: agent,ai,automation,cli,orchestrator,tmux
Classifier: Development Status :: 2 - Pre-Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Systems Administration
Requires-Python: >=3.11
Requires-Dist: aiosqlite>=0.19.0
Requires-Dist: libtmux>=0.31.0
Requires-Dist: openai>=1.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-docx>=1.1.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: tomli>=2.0.0; python_version < '3.11'
Requires-Dist: typer>=0.9.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Description-Content-Type: text/markdown

<div align="center">

# ALFIE CLI

**Artificial Lifeform Intelligent Entity**

*A terminal-native AI agent orchestrator that plans, executes, monitors, and self-heals shell-level tasks using LLM reasoning.*

[![PyPI version](https://img.shields.io/pypi/v/alfie-cli.svg)](https://pypi.org/project/alfie-cli/)
[![Python 3.11+](https://img.shields.io/badge/python-3.11%2B-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

</div>

---

## What is ALFIE?

ALFIE bridges the gap between **AI that talks** and **AI that works**. Instead of just generating code or giving advice, ALFIE takes control of your terminal — planning multi-step shell workflows as directed acyclic graphs (DAGs), executing them with real process management, monitoring output in real time, and recovering from failures automatically.

**Give it an intent in plain English — it breaks it into tasks, orders them by dependencies, runs them in parallel where possible, and reports back.**

### Key Features

- **Natural Language → Execution Plans** — Describe what you want; ALFIE's LLM planner generates a fully structured DAG of shell commands  
- **DAG Scheduler** — Topological sorting, dependency tracking, parallel execution across a pane pool  
- **Multi-Model Support** — Works with OpenAI, Anthropic, and Google models via the Vercel AI Gateway  
- **Interactive Chat** — Multi-turn REPL with conversation memory and auto-execution of detected plans  
- **Watcher Engine** — Real-time pane monitoring with state classification (idle / running / error / stuck / interactive)  
- **Safety Guard** — Command blocklist, confirmation prompts for destructive operations, configurable safety policies  
- **OS-Aware** — Detects Windows/Linux/macOS and generates platform-appropriate commands (PowerShell on Windows, bash on *nix)  
- **Session Memory** — Persistent conversation history with per-session storage, trimming, and recall  
- **Dry Run Mode** — Validate and safety-check plans without executing anything  
- **Rich Terminal UI** — Beautiful tables, panels, spinners, and live-updating displays via Rich  
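
To illustrate the Safety Guard idea, a blocklist check might look like the following (a hypothetical sketch with made-up patterns, not ALFIE's actual `safety/guard.py` API, which is configurable):

```python
import re

# Hypothetical patterns for illustration; the real blocklist is configurable.
BLOCKLIST = [
    re.compile(r"\brm\s+-rf\s+/(\s|$)"),   # delete the filesystem root
    re.compile(r"\bmkfs(\.\w+)?\b"),       # reformat a disk
    re.compile(r":\(\)\s*\{.*\};\s*:"),    # classic fork bomb
]

def is_blocked(cmd: str) -> bool:
    """Return True if the command matches any blocklisted pattern."""
    return any(p.search(cmd) for p in BLOCKLIST)
```

A guard like this would reject `rm -rf /` outright while still allowing scoped deletions such as `rm -rf /tmp/old` (which instead trigger a confirmation prompt).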

---

## Installation

```bash
pip install alfie-cli
```

Or install from source:

```bash
git clone https://github.com/p-typed/alfie-cli.git
cd alfie-cli
pip install -e ".[dev]"
```

### Requirements

- **Python 3.11+**
- An API key for the [Vercel AI Gateway](https://sdk.vercel.ai/) (supports OpenAI, Anthropic, Google models)

### Configuration

Set your API key as an environment variable:

```bash
# PowerShell (Windows)
$env:AI_GATEWAY_API_KEY = "your-api-key"

# Bash / Zsh (Linux / macOS)
export AI_GATEWAY_API_KEY="your-api-key"
```

Or create a `.env` file in your project root:

```
AI_GATEWAY_API_KEY=your-api-key
```

---

## Quick Start

```bash
# Check ALFIE is installed
alfie version

# Verify system requirements
alfie doctor

# Run a task from natural language
alfie run "create a hello.txt file that says hello world"

# Run a multi-step task
alfie run "find all python files, count lines of code, and write a summary to report.txt"

# Use a specific model
alfie run "list system info" --model anthropic/claude-sonnet-4-20250514

# Dry-run to validate without executing
alfie run "delete temp files" --dry-run
```

---

## Commands

### `alfie run`

Execute a task from natural language or a JSON plan file.

```bash
# Natural language intent
alfie run "install dependencies and run tests"

# From a saved plan file
alfie run "deploy" --plan ./my-plan.json

# Dry-run mode — validate safety without executing
alfie run "rm -rf /tmp/old" --dry-run

# Specify a model
alfie run "gather system specs" --model google/gemini-2.5-flash
```
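
A plan file passed via `--plan` is a JSON document. Based on the task fields exposed in the Python API (`id`, `cmd`, `depends_on`), a minimal plan might look like this (illustrative only; the exact schema may include additional fields):

```json
{
  "intent": "install dependencies and run tests",
  "tasks": [
    { "id": "t1", "cmd": "pip install -r requirements.txt", "depends_on": [] },
    { "id": "t2", "cmd": "pytest", "depends_on": ["t1"] }
  ]
}
```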

**How it works:**
1. Your intent is sent to the LLM planner
2. The planner returns a structured DAG of tasks with dependencies
3. The DAG scheduler topologically sorts tasks and executes them layer by layer
4. Results are displayed in a rich summary table
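
Step 3 can be sketched with Kahn's algorithm: repeatedly peel off the tasks whose dependencies are all satisfied, so each layer can run in parallel (a minimal illustration; ALFIE's `scheduler/dag.py` adds pane pooling and failure handling on top of this):

```python
def topological_layers(tasks: dict[str, list[str]]) -> list[list[str]]:
    """Group tasks into layers; tasks within a layer can run in parallel.

    `tasks` maps a task id to the ids it depends on.
    """
    remaining = {t: set(deps) for t, deps in tasks.items()}
    layers = []
    while remaining:
        # Tasks with no unmet dependencies form the next layer.
        ready = sorted(t for t, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("cycle detected in task graph")
        layers.append(ready)
        for t in ready:
            del remaining[t]
        for deps in remaining.values():
            deps.difference_update(ready)
    return layers

# Example: t2 and t3 both depend on t1; t4 needs both.
print(topological_layers({
    "t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"],
}))
# [['t1'], ['t2', 't3'], ['t4']]
```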

### `alfie chat`

Interactive multi-turn conversation with auto-execution of plans.

```bash
# Start a new chat session
alfie chat

# Resume a previous session
alfie chat --session abc123

# Use a different model
alfie chat --model anthropic/claude-sonnet-4-20250514
```

**Chat commands:**
| Command  | Description                    |
|----------|--------------------------------|
| `/clear` | Clear conversation history     |
| `/info`  | Show session info              |
| `/exit`  | Quit (or press Ctrl+C)         |

When you ask ALFIE to *do* something in chat, it automatically generates and executes a plan with no extra confirmation step (the Safety Guard still screens commands first).

### `alfie memory`

Manage conversation memory across sessions.

```bash
# List all saved sessions
alfie memory list

# Show a specific session
alfie memory show <session-id>

# Delete a session
alfie memory delete <session-id>

# Clear all sessions
alfie memory clear
```

### `alfie watch`

Monitor a running tmux session in real time.

```bash
# Watch a tmux session
alfie watch my-session

# Custom poll interval and timeout
alfie watch my-session --poll 5 --timeout 600
```
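
The watcher's state classification (idle / running / error / stuck / interactive) can be approximated with simple heuristics over the tail of a pane's output (a hypothetical sketch, not the actual `watcher/engine.py` logic):

```python
import re

# Illustrative heuristics only; the real classifier may differ.
ERROR_PAT = re.compile(r"(error|traceback|command not found)", re.IGNORECASE)
PROMPT_PAT = re.compile(r"[$#>]\s*$")                    # shell prompt at end
QUESTION_PAT = re.compile(r"\?\s*$|\[y/n\]\s*$", re.IGNORECASE)

def classify(tail: str, unchanged_polls: int, stuck_threshold: int = 10) -> str:
    """Classify a pane from the tail of its output and its poll history."""
    if ERROR_PAT.search(tail):
        return "error"
    if QUESTION_PAT.search(tail):
        return "interactive"   # waiting for user input
    if PROMPT_PAT.search(tail):
        return "idle"          # back at the shell prompt
    if unchanged_polls >= stuck_threshold:
        return "stuck"         # output frozen for too long
    return "running"
```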

### `alfie doctor`

Check system requirements and configuration.

```bash
alfie doctor
```

Reports the Python version, tmux availability, API key configuration, and more.

### `alfie config`

Display the current configuration as JSON.

```bash
alfie config
```

### Other Commands

| Command          | Description                              |
|------------------|------------------------------------------|
| `alfie version`  | Show ALFIE version                       |
| `alfie status`   | Check status of current session          |
| `alfie kill`     | Emergency stop — kill all running tasks  |
| `alfie history`  | Show past sessions                       |
| `alfie logs`     | Tail audit log                           |

---

## Supported Models

ALFIE works with any model available through the Vercel AI Gateway. Tested models include:

| Provider   | Model                           | Flag                                     |
|------------|---------------------------------|------------------------------------------|
| OpenAI     | GPT-4o Mini *(default)*         | `--model gpt-4o-mini`                    |
| OpenAI     | GPT-4o                          | `--model gpt-4o`                         |
| Anthropic  | Claude Sonnet 4                 | `--model anthropic/claude-sonnet-4-20250514`     |
| Google     | Gemini 2.5 Pro                  | `--model google/gemini-2.5-pro`          |
| Google     | Gemini 2.5 Flash                | `--model google/gemini-2.5-flash`        |
| Google     | Gemini 2.0 Flash                | `--model google/gemini-2.0-flash`        |

---

## Python API

ALFIE can be used as a library in your own Python code:

```python
from alfie.planner import Planner
from alfie.scheduler.dag import TaskScheduler, PanePool, topological_sort
from alfie.models import Plan

# Generate a plan from natural language
planner = Planner(model="gpt-4o-mini")
plan = planner.generate("install numpy and run a quick benchmark")

# Inspect the plan
for task in plan.tasks:
    print(f"{task.id}: {task.cmd} (depends on: {task.depends_on})")

# Use the DAG scheduler
layers = topological_sort(plan.tasks)
print(f"Execution will proceed in {len(layers)} layers")
```

### Async Support

```python
import asyncio
from alfie.planner import Planner

async def main():
    planner = Planner(model="gpt-4o-mini")
    plan = await planner.agenerate("check disk usage and memory stats")
    for task in plan.tasks:
        print(f"{task.id}: {task.cmd}")

asyncio.run(main())
```

### Memory Store

```python
from alfie.memory.store import MemoryStore

# Create or resume a session
mem = MemoryStore(session_id="my-session")
mem.add("user", "Hello ALFIE")
mem.add("assistant", "Hello! How can I help?")

# Get messages for LLM context
messages = mem.get_context_messages(system_prompt="You are ALFIE.")

# List all sessions
for session in mem.list_sessions():
    print(session)
```

---

## Architecture

```
alfie/
├── cli.py              # Typer CLI — all user-facing commands
├── config.py           # Pydantic configuration (TOML-backed)
├── models.py           # Domain models — Task, Plan, Session, etc.
├── planner/
│   ├── base.py         # System prompt + JSON schema for structured output
│   ├── client.py       # OpenAI SDK wrapper → Vercel AI Gateway
│   └── planner.py      # LLM planning engine with retry + DAG validation
├── scheduler/
│   └── dag.py          # Topological sort, PanePool, TaskScheduler
├── watcher/
│   └── engine.py       # Real-time pane monitoring + state classification
├── memory/
│   ├── store.py        # JSON-backed conversation persistence
│   └── prompts.py      # Chat system prompt (OS-aware)
├── events/
│   └── bus.py          # Pub/sub event bus
├── safety/
│   └── guard.py        # Command validation + blocklist
└── tmux/
    └── session.py      # tmux session/pane wrappers
```
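
The `events/bus.py` component implements a pub/sub event bus. The pattern can be sketched like this (an illustrative minimum, not ALFIE's actual interface):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal synchronous pub/sub bus."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

# Example: a scheduler could publish task events that the UI subscribes to.
bus = EventBus()
seen = []
bus.subscribe("task.finished", seen.append)
bus.publish("task.finished", {"id": "t1", "status": "ok"})
print(seen)  # [{'id': 't1', 'status': 'ok'}]
```

Decoupling components this way lets the watcher, scheduler, and UI react to the same events without importing each other directly.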

### Execution Flow

```
User Intent (natural language)
        │
        ▼
┌───────────────┐
│  LLM Planner  │  ← Vercel AI Gateway (OpenAI / Anthropic / Google)
└───────┬───────┘
        │  structured JSON plan (DAG)
        ▼
┌───────────────┐
│  Safety Guard │  ← blocklist check, confirmation prompts
└───────┬───────┘
        │
        ▼
┌───────────────┐
│ DAG Scheduler │  ← topological sort → layer-by-layer execution
└───────┬───────┘
        │  parallel task dispatch
        ▼
┌───────────────┐
│   Executor    │  ← subprocess (PowerShell on Windows, bash on *nix)
└───────┬───────┘
        │  stdout/stderr
        ▼
┌───────────────┐
│    Watcher    │  ← state classification, timeout detection, recovery
└───────────────┘
```

---

## Development

```bash
# Clone the repo
git clone https://github.com/p-typed/alfie-cli.git
cd alfie-cli

# Create a virtual environment
python -m venv venv
venv\Scripts\activate  # Windows
# source venv/bin/activate  # Linux/macOS

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Lint
ruff check src/ tests/
```

### Running Tests

```bash
# Run all 172 tests
pytest

# Run with verbose output
pytest -v

# Run a specific test file
pytest tests/test_scheduler.py

# Run a specific test
pytest tests/test_planner.py -k "test_generate_simple"
```

---

## Roadmap

- [x] **Phase 0** — Foundation (CLI, config, models, events, safety)
- [x] **Phase 1** — Watcher Engine (real-time pane monitoring)
- [x] **Phase 2** — DAG Scheduler (topological sort, parallel execution)
- [x] **Phase 3** — LLM Planner (Vercel AI Gateway integration)
- [x] **Memory & Chat** — Persistent conversations + interactive REPL
- [ ] **Phase 4** — Real tmux executor (full process management)
- [ ] **Phase 5** — Persistent storage (aiosqlite session history)
- [ ] **Phase 6** — Re-planning (automatic failure recovery via LLM)
- [ ] **Phase 7** — Polish (documentation, packaging, CI/CD)

---

## License

MIT — see [LICENSE](LICENSE).

---

<div align="center">

**Built by [P-Typed Research Labs](https://github.com/p-typed)**

*ALFIE doesn't just plan — it executes.*

</div>
