Metadata-Version: 2.4
Name: nexcode-ai
Version: 0.3.0
Summary: CLI for Nexcode AI - configure providers and run workspace agents from your terminal
Author: Suriya
License-Expression: MIT
Keywords: ai,cli,nexcode,llm,agent
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Utilities
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.28
Requires-Dist: rich>=13.0
Requires-Dist: openai>=1.0.0
Dynamic: license-file

# nexcode

> **CLI for Nexcode AI** - configure providers and run workspace agents from your terminal.

[![PyPI](https://img.shields.io/pypi/v/nexcode-ai)](https://pypi.org/project/nexcode-ai/)
[![Python](https://img.shields.io/pypi/pyversions/nexcode-ai)](https://pypi.org/project/nexcode-ai/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

## Installation

```bash
pip install nexcode-ai
```

## Quick Start

### 1. Login

```bash
nexcode login
```

You'll be prompted to configure a provider such as:
- **xAI Grok API Key** - generate one from the xAI console ([console.x.ai](https://console.x.ai))
- **Google Gemini API Key** - generate one from Google AI Studio

Credentials are saved to `~/.nexcode-ai/nexcode.json`.

### 2. Run the agent

```bash
nexcode agent "hi"
nexcode agent "What is the capital of India?"
nexcode agent "Explain this repo and suggest fixes"
nexcode agent "Fix the login bug"
nexcode agent "Run test.py and show the output"
```

The response streams directly to your terminal in real time.

### 3. Piped input

You can also pipe content directly into the CLI:

```bash
cat prompt.txt | nexcode agent "build this project"
cat log.txt | nexcode agent "analyse and fix that bug"
git diff app.py | nexcode agent "Explain what changed"
```
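Combining piped stdin with an inline instruction comes down to reading stdin when it is not a terminal and appending it to the message. A minimal Python sketch of that mechanism (`build_prompt` is a hypothetical helper, not Nexcode's actual API):

```python
import io

def build_prompt(message, stdin):
    """Combine an inline message with piped stdin content.

    Hypothetical helper: when stdin is not a TTY, its contents are
    appended to the message so the agent sees both in one turn.
    """
    parts = [message] if message else []
    if stdin is not None and not stdin.isatty():
        piped = stdin.read().strip()
        if piped:
            parts.append(piped)
    return "\n\n".join(parts)

# Simulates `cat log.txt | nexcode agent "analyse and fix that bug"`
fake_log = io.StringIO("Traceback (most recent call last): ...")
print(build_prompt("analyse and fix that bug", fake_log))
```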

### 4. Use images

You can attach screenshots or reference images with `--image`. This is helpful for UI bugs, broken layouts, visual regressions, and mobile issues.

Single image:

```bash
nexcode agent --image ./bug.png "This mobile layout is broken. Please fix it."
```

Multiple images:

```bash
nexcode agent --image ./actual.png --image ./expected.png "Compare these screens and fix the mismatch."
```

Image URL:

```bash
nexcode agent --image "https://example.com/bug.png" "See this screenshot and fix the issue."
```

Repeat `--image` for each image you want to attach in the same turn.
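A repeatable flag like this is commonly implemented with argparse's `append` action. The sketch below mimics the documented interface; the `--image` flag name comes from the docs, while the parser itself is purely illustrative:

```python
import argparse

parser = argparse.ArgumentParser(prog="nexcode-agent-sketch")
# Each occurrence of --image appends to the same list for this turn
parser.add_argument("--image", action="append", default=[],
                    dest="images", metavar="PATH_OR_URL",
                    help="attach an image; repeat for multiple images")
parser.add_argument("message", nargs="?", default="")

args = parser.parse_args(
    ["--image", "actual.png", "--image", "expected.png",
     "Compare these screens and fix the mismatch."]
)
print(args.images)  # ['actual.png', 'expected.png']
```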

### 5. Interactive mode

Start a workspace chat session:

```bash
nexcode agent
```

Useful interactive commands:

```text
/image <path-or-url> [message...]
/clear_images
/reset
/exit
```

Example REPL flow:

```text
agent> /image ./bug.png
agent> fix this navbar overlap on mobile
agent> /image ./expected.png compare this with the current screen and correct the spacing too
agent> /clear_images
agent> run test.py and show output
```

If you include a message after `/image`, Nexcode sends the image and message together in the same turn.
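One way such slash commands can be dispatched is sketched below. `parse_repl_line` is a hypothetical parser written for illustration, not Nexcode's internal one; only the command names come from the docs:

```python
def parse_repl_line(line):
    """Split a REPL line into (command, payload).

    Hypothetical sketch of handling `/image <path-or-url> [message...]`:
    anything after the path rides along as the message for the same turn.
    """
    line = line.strip()
    if not line.startswith("/"):
        return ("say", line)  # plain chat message
    cmd, _, rest = line.partition(" ")
    if cmd == "/image":
        path, _, message = rest.partition(" ")
        return ("image", (path, message.strip()))
    return (cmd.lstrip("/"), rest)

print(parse_repl_line("/image ./bug.png fix this navbar overlap"))
```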

## Agent Examples

Common ways to use `nexcode agent`:

```bash
# Casual chat
nexcode agent "hi"

# Ask about the current repo
nexcode agent "Explain this codebase and suggest improvements"

# Review architecture
nexcode agent "Review this repo and find architecture issues"

# Fix a bug
nexcode agent "Fix the checkout bug on mobile Safari"

# Fix a visual bug from a screenshot
nexcode agent --image ./checkout-error.png "The checkout button is broken on mobile. Fix it."

# Use multiple screenshots for the same bug
nexcode agent --image ./actual.png --image ./expected.png "This page does not match the expected design. Fix it."

# Run a file and inspect output
nexcode agent "Run test.py and tell me what you see"

# Build something new in the current workspace
nexcode agent build "Create a landing page for an AI coding product"

# Build from a piped app prompt file
cat app_prompt.txt | nexcode agent build "this project"

# Build inside a new subfolder
nexcode agent --new-in landing-page "Create a marketing site for Nexcode"

# Resume the current workspace plan
nexcode agent resume

# Show detailed routing and execution logs
nexcode agent "Fix the dashboard layout bug" --verbose
```

## Commands

| Command | Description |
|---|---|
| `nexcode` | Show help |
| `nexcode help` | Show help |
| `nexcode login` | Configure a provider API key |
| `nexcode switch grok` | Switch the active provider to Grok |
| `nexcode switch gemini` | Switch the active provider to Gemini |
| `nexcode config show` | Show the active provider, model, and settings |
| `nexcode config model <name>` | Change the active provider model |
| `nexcode agent [message]` | Smart workspace agent for chat, reviews, fixes, and builds in the current folder |
| `nexcode agent --image <path-or-url> [message]` | Attach one image to the current agent turn |
| `nexcode agent --reset` | Reset current workspace context without deleting history |
| `nexcode agent --new-in <folder> [message]` | Build in a new subfolder inside the current workspace |
| `nexcode agent resume` | Resume the current workspace plan |
| `nexcode version` | Show version |

## Home Directory

Global data is stored in `~/.nexcode-ai/`:

```text
~/.nexcode-ai/
|-- nexcode.json         # Endpoint + API key (private)
`-- agent.yaml           # Global agent defaults
```

Workspace runtime state lives in the current repo's `.agent/` folder, including agent history, logs, snapshots, and optional local `agent.yaml` overrides.

Use `nexcode config show` to inspect the active provider and selected model.

## Agent Config

Agent behavior is controlled with YAML config files:

- Global defaults: `~/.nexcode-ai/agent.yaml`
- Per-workspace overrides: `.agent/agent.yaml`

Local workspace config wins over the global file, so you can keep one default setup and override it only for a specific repo.
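The precedence rule amounts to a shallow merge in which workspace keys win. A sketch of that rule (Nexcode's actual config loader is internal; the function below is illustrative):

```python
def effective_config(global_cfg, workspace_cfg):
    """Merge agent config dicts, with workspace keys overriding global ones."""
    merged = dict(global_cfg)
    merged.update(workspace_cfg)
    return merged

global_cfg = {"model": "grok-4-1-fast-reasoning", "max_steps": 60, "temperature": 0.2}
workspace_cfg = {"max_steps": 80, "assistant_intent_mode": "balanced"}
# max_steps resolves to 80 here: the workspace value wins
print(effective_config(global_cfg, workspace_cfg))
```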

Example:

```yaml
model: grok-4-1-fast-reasoning
max_steps: 60
temperature: 0.2
max_rejects: 8
max_step_attempts: 20
coder_context_summary_threshold: 250000
assistant_routing_strategy: heuristic
assistant_intent_mode: conservative
fix_history_summary_threshold: 500000
```

### Important options

| Key | Description |
|---|---|
| `model` | Model used for agent planning, investigation, and chat replies |
| `assistant_routing_strategy` | Main router strategy: `heuristic` or `ai_first` |
| `assistant_intent_mode` | How aggressively the assistant interprets user requests: `conservative`, `balanced`, or `autonomous` |
| `max_steps` | Maximum execution steps for long-running plans |
| `max_step_attempts` | Retry limit for an individual step |
| `max_rejects` | Maximum correction loops before stopping a failing run |
| `temperature` | Model creativity / variability |
| `coder_context_summary_threshold` | When code context gets too large, Nexcode summarizes it to stay within the context window |
| `fix_history_summary_threshold` | When assistant history gets too large, Nexcode compresses older history before injecting it into prompts |
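Both summary thresholds boil down to a size check before prompt assembly. An illustrative sketch (the docs do not specify whether the unit is characters or tokens, so plain string length stands in here):

```python
def needs_summary(context, threshold):
    """Decide whether accumulated context should be summarized.

    Illustrative only: plain string length stands in for whatever size
    metric Nexcode actually uses internally.
    """
    return len(context) > threshold

print(needs_summary("x" * 300_000, 250_000))  # True: context outgrew the threshold
print(needs_summary("x" * 1_000, 250_000))    # False: still fits
```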

### Routing strategy

- `heuristic` (default): fast rule-based routing first, with model help inside assistant flows
- `ai_first`: model-based routing for natural-language requests like casual chat, review, consult, implementation, or mixed prompts

Even in `ai_first` mode, risky actions still stay behind deterministic guardrails:

- `run`, `build`, `test`, and `open` requests still resolve through safe internal validation
- Nexcode does not let the model invent shell commands
- File/runtime selection stays deterministic, for example `.py -> python` and `.js -> node`
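Deterministic file-to-runtime selection can be pictured as a fixed lookup table. Only the `.py -> python` and `.js -> node` entries come from the docs; the function name and error handling below are made up for illustration:

```python
from pathlib import Path

# Fixed table: the model never gets to invent an interpreter command.
RUNTIMES = {".py": "python", ".js": "node"}

def runtime_for(filename):
    """Resolve the runtime for a file by extension, deterministically."""
    ext = Path(filename).suffix
    try:
        return RUNTIMES[ext]
    except KeyError:
        raise ValueError(f"no deterministic runtime for {ext!r}")

print(runtime_for("test.py"))  # python
```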

### Intent mode

- `conservative` (default): prefers asking or consulting unless the user clearly asked for code changes
- `balanced`: a middle ground between discussion and action
- `autonomous`: more willing to interpret suggestions as implementation follow-through when context supports it

### Example workspace override

Use `.agent/agent.yaml` inside a project when you want different behavior just for that repo:

```yaml
assistant_routing_strategy: ai_first
assistant_intent_mode: balanced
max_steps: 80
```

This is useful if one repo needs more natural chat routing, while your default global config stays more conservative.

## License

MIT (c) Nexcode-AI
