Metadata-Version: 2.4
Name: agv
Version: 0.3.2
Summary: AgentEnv Cloud CLI and Python SDK for sandboxes, notebooks, clusters, and AI workloads
Project-URL: Homepage, https://agentenv.io
Project-URL: Documentation, https://github.com/agentenv/monorepo/tree/main/mintlify_docs
Project-URL: Repository, https://github.com/agentenv/monorepo
Project-URL: Issues, https://github.com/agentenv/monorepo/issues
Author-email: AgentEnv <support@agentenv.io>
Keywords: agentenv,cli,cloud,sandbox,sdk
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing
Requires-Python: >=3.10
Requires-Dist: authlib>=1.3.0
Requires-Dist: httpx>=0.27.0
Requires-Dist: keyring>=24.0.0
Requires-Dist: pydantic-settings>=2.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0.0
Requires-Dist: typer>=0.12.0
Provides-Extra: dev
Requires-Dist: mypy>=1.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest-cov>=4.0; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Provides-Extra: ray
Requires-Dist: ray<2.10,>=2.9.3; extra == 'ray'
Provides-Extra: spark
Requires-Dist: grpcio-status>=1.48.1; extra == 'spark'
Requires-Dist: grpcio>=1.48.1; extra == 'spark'
Requires-Dist: pandas>=2.2.0; extra == 'spark'
Requires-Dist: pyarrow>=15.0.0; extra == 'spark'
Requires-Dist: pyspark<3.6,>=3.5.1; extra == 'spark'
Requires-Dist: zstandard>=0.25.0; extra == 'spark'
Description-Content-Type: text/markdown

# AgentEnv CLI

A command-line interface for AgentEnv Cloud: manage containers, sandboxes, and more.

## Installation

```bash
pip install agv
```

This package publishes the Python module as `agentenv` and installs two CLI entrypoints:

- `agv`: package-aligned short command
- `agentenv`: explicit command name for documentation and shell usage

The examples below use `agv`, but `agentenv` is equivalent:

```bash
agv version
agentenv version
python -m agentenv version
```

## Quick Start

```bash
# Authenticate with an API key
agv login --api-key sk_live_xxxxx

# Set default sandbox type
agv set type xl

# Run a Python sandbox
agv run -- python3 -m http.server

# List sandboxes
agv ls

# View logs
agv logs -f <sandbox-id>
```

## Ray / Spark Clusters

The CLI includes cluster lifecycle commands, and the Python SDK exposes higher-level helpers that provision Ray/Spark clusters and connect to the cluster head over a direct public endpoint.

### Cluster CLI

```bash
agv cluster ray 4x2xH100 --workspace <workspace-id>
agv cluster spark 2xH100 --workspace <workspace-id> --wait
agv cluster ls
agv cluster inspect <cluster-id>
agv cluster stop <cluster-id>
```

Direct-connect requires that the scheduler can resolve the head hypervisor's public IP and (best-effort) open inbound security-group rules for `allow_cidr`. If that integration is not configured, the cluster may start, but `metadata.rayAddress` / `metadata.sparkRemote` will be missing and auto-connect will fail.
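When auto-connect fails this way, it helps to fail fast with a clear message. A minimal sketch of such a guard, assuming cluster metadata is available as a plain dict (the helper name is illustrative, not part of the SDK):

```python
def direct_connect_address(metadata: dict, kind: str) -> str:
    """Return the head endpoint for auto-connect, or raise with a clear hint.

    kind is "ray" or "spark"; the keys mirror the ones the scheduler
    populates (metadata.rayAddress / metadata.sparkRemote).
    """
    key = {"ray": "rayAddress", "spark": "sparkRemote"}[kind]
    address = (metadata or {}).get(key)
    if not address:
        raise RuntimeError(
            f"cluster started but metadata.{key} is missing; "
            "direct-connect integration is likely not configured on the scheduler"
        )
    return address

# A cluster whose scheduler did populate the Ray endpoint:
print(direct_connect_address({"rayAddress": "ray://1.2.3.4:10001"}, "ray"))
```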

Install optional client deps if you want auto-connect:

```bash
pip install "agv[ray]"
pip install "agv[spark]"
```

The `spark` extra installs PySpark plus the Spark Connect runtime dependencies (pandas, pyarrow, grpcio, grpcio-status, zstandard).
Keep the PySpark major/minor version in sync with the Spark server version (defaults to Spark 3.5.x).
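The major/minor constraint can be checked mechanically; this helper is illustrative rather than part of the package:

```python
def connect_compatible(client_version: str, server_version: str) -> bool:
    """Spark Connect expects matching major/minor between PySpark and the server."""
    def major_minor(v: str) -> tuple:
        return tuple(int(p) for p in v.split(".")[:2])
    return major_minor(client_version) == major_minor(server_version)

print(connect_compatible("3.5.1", "3.5.4"))  # True: same 3.5 line
print(connect_compatible("3.4.2", "3.5.1"))  # False: minor mismatch
```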

### Ray (Ray Client)

```python
import agentenv as agv

# Shape formats:
# - "4x2xH100" => head + 4 workers, 2x H100 per worker
# - "2xH100"   => single node (head only), 2x H100 on the head
cluster = agv.ray_init("4x2xH100", allow_cidr="1.2.3.4/32")

import ray

@ray.remote
def f(x):
    return x + 1

print(ray.get(f.remote(1)))

cluster.close(stop_cluster=True)
```
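The shape strings in the comments above follow a simple grammar; a sketch of a parser for them (illustrative only, not the SDK's actual parsing):

```python
import re

def parse_shape(shape: str) -> dict:
    """Parse cluster shape strings as documented above.

    "4x2xH100" -> 4 workers with 2 H100s each (plus a head)
    "2xH100"   -> head only, with 2 H100s on the head
    """
    three = re.fullmatch(r"(\d+)x(\d+)x(\w+)", shape)
    if three:
        workers, gpus, gpu_type = three.groups()
        return {"workers": int(workers), "gpus_per_node": int(gpus), "gpu": gpu_type}
    two = re.fullmatch(r"(\d+)x(\w+)", shape)
    if two:
        gpus, gpu_type = two.groups()
        return {"workers": 0, "gpus_per_node": int(gpus), "gpu": gpu_type}
    raise ValueError(f"unrecognized shape: {shape!r}")

print(parse_shape("4x2xH100"))  # {'workers': 4, 'gpus_per_node': 2, 'gpu': 'H100'}
print(parse_shape("2xH100"))    # {'workers': 0, 'gpus_per_node': 2, 'gpu': 'H100'}
```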

### Spark (Spark Connect)

```python
import agentenv as agv

spark = agv.spark_init("4x2xH100", allow_cidr="1.2.3.4/32")
print(spark.range(10).count())

spark.close(stop_cluster=True)
```

Notes:
- `spark.remote` is a standard Spark Connect URL that includes an auth param: `sc://<host>:<port>/;x-api-key=<token>`.
- The Spark head image must include the gRPC auth proxy dependency (`haproxy`). Use `Dockerfile.spark` and `scripts/ci/build-public-spark-image.sh`, then set `SPARK_IMAGE_DEFAULT` on the api-server (or pass `image=` explicitly).
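As a worked example of the URL format from the first note, a small builder (the host, port, and token values are placeholders; 15002 is the standard Spark Connect port):

```python
def spark_remote_url(host: str, port: int, token: str) -> str:
    """Build a Spark Connect URL carrying the x-api-key auth param."""
    return f"sc://{host}:{port}/;x-api-key={token}"

print(spark_remote_url("head.example.com", 15002, "sk_live_xxx"))
# sc://head.example.com:15002/;x-api-key=sk_live_xxx
```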

## Commands

### Authentication

```bash
agv login --api-key <key>           # Login with API key
agv login                           # Browser-based login
agv login --username <user> --password <pass>  # Local dev login
agv logout                          # Logout
agv auth status                     # Show auth status
agv auth create-key "My CLI Key"    # Create API key
agv auth list-keys                  # List API keys
```

### Sandbox Management

```bash
agv run --type xl -- python3 -m http.server    # Create and run
agv run --expose 8080:http -- node server.js   # With port exposed
agv ls                                         # List sandboxes
agv inspect <id>                               # Show details
agv logs -f <id>                               # Follow logs
agv stop <id>                                  # Stop sandbox
agv rm <id>                                    # Delete sandbox
```

### Preset Types

| Type   | CPU (millicores) | Memory |
|--------|------------------|--------|
| micro  | 500              | 512 MB |
| small  | 2000             | 4 GB   |
| medium | 4000             | 8 GB   |
| large  | 8000             | 16 GB  |
| xl     | 16000            | 32 GB  |

### Snapshots

```bash
agv snapshot create <sandbox-id> --name "My Environment"
agv snapshot ls
agv snapshot restore <snapshot-id>
```

### Apps

```bash
agv app create --name web --port 8080 --min 0 --max 3
agv app create --name api --port 8080 --ready http_health --health-path /health
agv app deploy web --snapshot <snapshot-id>
agv app ls
agv app inspect <app-id-or-slug>
agv app logs <app-id-or-slug>
agv app rm <app-id-or-slug>
```

Notes:
- Ready types: `port_accessible`, `http_health`.
- If you specify `http_health` without `--health-path`, the CLI defaults to `/health`.

### agv.function (Single-node remote function)

```python
import agentenv as agv

@agv.function("small", image="python:3.11-slim")
def add(x, y):
    return x + y

print(add(2, 3))
```

Using an ImageBuilder:

```python
import agentenv as agv

builder = agv.py().python_packages(["numpy"])

@agv.function("small", image=builder)
def norm(x):
    import numpy as np
    return float(np.linalg.norm(x))

print(norm([3, 4]))
```

Notes:
- Only single-node specs are supported (preset types like `small`, or `cpu:mem`).
- The function must be importable in the sandbox image (no nested or `__main__` functions).

### Browser Sessions

```bash
agv browser create                    # Create browser session
agv browser create --screen-width 1920 --screen-height 1080 --stealth
agv browser create --profile-mode ephemeral --rrweb
agv browser ls
agv browser inspect <id>
```

### Managed Agents

```bash
agv managed-agent ls --workspace <workspace-id>
agv managed-agent create --name "Research Agent" --workspace <workspace-id> --upstream-id <upstream-id>
agv managed-agent create --name "Research Agent" --workspace <workspace-id> --upstream-id <upstream-id> --image docker://registry.example/tintin-managed-agent:latest
agv managed-agent inspect <agent-id>
agv managed-agent messages <agent-id>
agv managed-agent send <agent-id> -- "summarize the repo status"
agv managed-agent wake <agent-id>
agv managed-agent fork <agent-id>
agv managed-agent rm <agent-id> --force
```

- `agv managed-agent create --image ...` overrides the server default image for this agent.
- `agv managed-agent create` without `--image` uses the server-configured default image (`MANAGED_AGENT_IMAGE`).

### Notebook Sessions

```bash
agv notebook session create --workspace <workspace-id>
agv notebook session create --workspace <workspace-id> --type xl
agv notebook session create --workspace <workspace-id> --image docker://quay.io/jupyter/datascience-notebook:notebook-7.5.5
agv notebook session create --workspace <workspace-id> --storage-mode persistent --idle-ttl 600
agv notebook session list
agv notebook session get <id>
```

### API Coverage

The CLI focuses on common day-to-day workflows. The Mintlify site and checked-in OpenAPI schema document the full `api-server` surface, including operational resources such as proxy usage, captcha usage, and webhook ingress.

### Workspaces

```bash
agv workspace create "My Workspace"
agv workspace ls
agv workspace use <workspace-id>
agv workspace secret-set <ws-id> KEY value
```

### Files

```bash
agv file upload ./myfile.txt
agv file download /remote.txt ./local.txt
agv file ls
```

### Workflows

```bash
agv workflow ls
agv workflow create "Daily Sync" --file workflow.json
agv workflow inspect <workflow-id>
agv workflow update <workflow-id> --file workflow.json
agv workflow deploy <workflow-id>
agv workflow undeploy <workflow-id>
agv workflow execute <workflow-id> --input '{"customerId":"cus_123"}'
agv workflow execute-in-memory --workspace-id <workspace-id> --file workflow.json
agv workflow executions <workflow-id>
agv workflow execution <workflow-id> <execution-id>
agv workflow cancel <workflow-id> <execution-id>
agv workflow metrics <workflow-id>
agv workflow metrics-timeseries <workflow-id> --interval day
agv workflow node-definitions
agv workflow node-definition webhook
agv workflow plugins
```

### Billing

```bash
agv balance                           # Show balance
agv billing history                   # Transaction history
```

### Configuration

```bash
agv set type xl                       # Set default type
agv set image python:3.11             # Set default image
agv set workspace <workspace-id>      # Set default workspace
agv config show                       # Show configuration
```

## AI Gateway

The CLI includes full support for the AgentEnv AI Gateway:

```bash
# Chat with AI models
agv ai chat "Hello!" --model gpt-4

# Manage providers
agv ai upstreams list
agv ai pools create --name production

# See the dedicated AI Gateway guide for complete documentation
```

See the AI Gateway guide:
<https://github.com/agentenv/monorepo/blob/main/cli/README-AI-GATEWAY.md>

## Configuration

The CLI reads configuration from multiple sources (in priority order):

1. Command-line flags (`--api-url`, `--workspace`, etc.)
2. Environment variables (`AGENTENV_API_URL`, `AGENTENV_API_KEY`, etc.)
3. Config file (`~/.agentenv/config.yaml`)
4. Project `.env` file
5. Built-in defaults
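The precedence above behaves like a layered lookup where the first source that defines a key wins. This `ChainMap` sketch only illustrates the ordering (the CLI's actual resolution goes through pydantic-settings):

```python
from collections import ChainMap

# First mapping wins, mirroring the priority order above.
flags = {"workspace": "wk_from_flag"}
env = {"api_url": "https://api.agentenv.io"}
config_file = {"api_url": "http://localhost:3000", "workspace": "wk_abc123"}
defaults = {"api_url": "http://localhost:3000", "type": "small"}

settings = ChainMap(flags, env, config_file, defaults)
print(settings["workspace"])  # wk_from_flag (flag beats config file)
print(settings["api_url"])    # https://api.agentenv.io (env beats config file)
print(settings["type"])       # small (only the defaults define it)
```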

### Config File (~/.agentenv/config.yaml)

```yaml
api_url: http://localhost:3000
workspace: wk_abc123

defaults:
  type: small
  image: docker.io/library/python:3.11-slim
  cpu: 2000
  memory: 4096
  region: us-east-1
```

Set `api_url` to the API root without a `/v1` suffix. If you do include one, the CLI normalizes it automatically.
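The normalization amounts to trimming a trailing `/v1`; an illustrative sketch (not the CLI's actual code):

```python
def normalize_api_url(url: str) -> str:
    """Strip a trailing /v1 (and any trailing slash) so api_url is the API root."""
    url = url.rstrip("/")
    if url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url

print(normalize_api_url("http://localhost:3000/v1/"))  # http://localhost:3000
print(normalize_api_url("http://localhost:3000"))      # http://localhost:3000
```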

## Release

Build release artifacts locally:

```bash
cd cli
uv build
python3 -m twine check dist/*
```

Release flow:

```bash
# 1. Bump the package version in cli/src/agentenv/_version.py

# 2. Sanity-check the version metadata
cd cli
python3 scripts/check_release_version.py --print-version

# 3. Build and validate the distributions
uv build
python3 -m twine check dist/*

# 4. Tag the release with the enforced format
git tag "agv-v$(python3 scripts/check_release_version.py --print-version)"
git push origin --tags
```

Upload them manually when you have PyPI credentials:

```bash
python3 -m twine upload dist/*
```

The repository also includes a GitHub Actions workflow for trusted publishing to PyPI, and it rejects tags whose version does not match the package version.
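The tag check amounts to enforcing the `agv-v<version>` format against the package version; an illustrative sketch of that comparison (not the workflow's actual code):

```python
import re

def tag_matches_version(tag: str, package_version: str) -> bool:
    """True if the tag follows agv-v<version> and matches the package version."""
    m = re.fullmatch(r"agv-v(\d+\.\d+\.\d+)", tag)
    return bool(m) and m.group(1) == package_version

print(tag_matches_version("agv-v0.3.2", "0.3.2"))  # True
print(tag_matches_version("agv-v0.3.1", "0.3.2"))  # False: version drift
print(tag_matches_version("v0.3.2", "0.3.2"))      # False: wrong tag format
```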
