Metadata-Version: 2.4
Name: agv
Version: 0.1.0
Summary: AgentEnv Cloud CLI and Python SDK for sandboxes, notebooks, clusters, and AI workloads
Project-URL: Homepage, https://agentenv.io
Project-URL: Documentation, https://github.com/agentenv/monorepo/tree/main/mintlify_docs
Project-URL: Repository, https://github.com/agentenv/monorepo
Project-URL: Issues, https://github.com/agentenv/monorepo/issues
Author-email: AgentEnv <support@agentenv.io>
Keywords: agentenv,cli,cloud,sandbox,sdk
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Distributed Computing
Requires-Python: >=3.10
Requires-Dist: authlib>=1.3.0
Requires-Dist: httpx>=0.27.0
Requires-Dist: keyring>=24.0.0
Requires-Dist: pydantic-settings>=2.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0.0
Requires-Dist: typer>=0.12.0
Provides-Extra: dev
Requires-Dist: mypy>=1.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest-cov>=4.0; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Provides-Extra: ray
Requires-Dist: ray<2.10,>=2.9.3; extra == 'ray'
Provides-Extra: spark
Requires-Dist: grpcio-status>=1.48.1; extra == 'spark'
Requires-Dist: grpcio>=1.48.1; extra == 'spark'
Requires-Dist: pandas>=2.2.0; extra == 'spark'
Requires-Dist: pyarrow>=15.0.0; extra == 'spark'
Requires-Dist: pyspark<3.6,>=3.5.1; extra == 'spark'
Requires-Dist: zstandard>=0.25.0; extra == 'spark'
Description-Content-Type: text/markdown

# AgentEnv CLI

A command-line interface for AgentEnv Cloud: manage containers, sandboxes, clusters, and more.

## Installation

```bash
pip install agv
```

This package publishes the Python module as `agentenv` and installs three CLI entrypoints:

- `agv`: short command that matches the package name
- `agentenv`: explicit command name for documentation and shell usage
- `av`: short alias kept for existing workflows

The examples below use `av`, but the other entrypoints are equivalent:

```bash
agv version
agentenv version
av version
python -m agentenv version
```

## Quick Start

```bash
# Authenticate with an API key
av login --api-key sk_live_xxxxx

# Set default sandbox type
av set type xl

# Run a Python sandbox
av run -- python3 -m http.server

# List sandboxes
av ls

# View logs
av logs -f <sandbox-id>
```

## Ray / Spark Clusters

The CLI includes cluster lifecycle commands, and the Python SDK exposes higher-level helpers that provision Ray/Spark clusters and connect to the cluster head over a direct public endpoint.

### Cluster CLI

```bash
av cluster ray 4x2xH100 --workspace <workspace-id>
av cluster spark 2xH100 --workspace <workspace-id> --wait
av cluster ls
av cluster inspect <cluster-id>
av cluster stop <cluster-id>
```

Direct connect requires the scheduler to be able to resolve the head hypervisor's public IP and, on a best-effort basis, open inbound security group rules for `allow_cidr`. If that integration is not configured, the cluster may start, but `metadata.rayAddress` / `metadata.sparkRemote` will be missing and auto-connect will fail.
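
A minimal defensive sketch that surfaces a missing direct-connect integration early (the exact exception raised on a failed auto-connect is not documented here, so a broad `Exception` catch stands in for it):

```python
import agentenv as av

try:
    cluster = av.ray_init("2xH100", allow_cidr="203.0.113.7/32")
except Exception as err:
    # If the scheduler integration is missing, metadata.rayAddress is never
    # populated and the SDK cannot auto-connect to the cluster head.
    raise SystemExit(f"Ray auto-connect failed; check direct-connect setup: {err}") from err
```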

Install the optional client dependencies if you want auto-connect:

```bash
pip install "agv[ray]"
pip install "agv[spark]"
```

The `spark` extra installs PySpark plus the Spark Connect runtime dependencies (pandas, pyarrow, grpcio, grpcio-status, zstandard).
Keep the PySpark major/minor version in sync with the Spark server version (the default is Spark 3.5.x).
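
For example, a quick local check that the installed client is on the same 3.5 line (this snippet is illustrative and not part of the CLI):

```python
# Verify the local PySpark client matches the Spark 3.5.x server line.
import pyspark

major, minor, *_ = pyspark.__version__.split(".")
if (major, minor) != ("3", "5"):
    raise SystemExit(f"PySpark {pyspark.__version__} may not match the Spark 3.5.x server")
```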

### Ray (Ray Client)

```python
import agentenv as av

# Shape formats:
# - "4x2xH100" => head + 4 workers, 2x H100 per worker
# - "2xH100"   => single node (head only), 2x H100 on the head
cluster = av.ray_init("4x2xH100", allow_cidr="1.2.3.4/32")

import ray

@ray.remote
def f(x):
    return x + 1

print(ray.get(f.remote(1)))

cluster.close(stop_cluster=True)
```

### Spark (Spark Connect)

```python
import agentenv as av

spark = av.spark_init("4x2xH100", allow_cidr="1.2.3.4/32")
print(spark.range(10).count())

spark.close(stop_cluster=True)
```

Notes:
- `spark.remote` is a standard Spark Connect URL that includes an auth param: `sc://<host>:<port>/;x-api-key=<token>` (see the manual-connect sketch after these notes).
- The Spark head image must include the gRPC auth proxy dependency (`haproxy`). Use `Dockerfile.spark` and `scripts/ci/build-public-spark-image.sh`, then set `SPARK_IMAGE_DEFAULT` on the api-server (or pass `image=` explicitly).
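
A minimal sketch of connecting with plain PySpark instead of `av.spark_init`, assuming you have copied the `metadata.sparkRemote` value (for example from `av cluster inspect`; the host, port, and token below are placeholders):

```python
from pyspark.sql import SparkSession

# Placeholder URL; substitute the value reported in metadata.sparkRemote.
remote_url = "sc://203.0.113.10:15002/;x-api-key=<token>"

spark = SparkSession.builder.remote(remote_url).getOrCreate()
print(spark.range(10).count())
spark.stop()
```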

## Commands

### Authentication

```bash
av login --api-key <key>           # Login with API key
av login                           # Browser-based login
av login --username <user> --password <pass>  # Local dev login
av logout                          # Logout
av auth status                     # Show auth status
av auth create-key "My CLI Key"    # Create API key
av auth list-keys                  # List API keys
```

### Sandbox Management

```bash
av run --type xl -- python3 -m http.server    # Create and run
av run --expose 8080:http -- node server.js   # With port exposed
av ls                                         # List sandboxes
av inspect <id>                               # Show details
av logs -f <id>                               # Follow logs
av stop <id>                                  # Stop sandbox
av rm <id>                                    # Delete sandbox
```

### Preset Types

| Type   | CPU   | Memory |
|--------|-------|--------|
| micro  | 500   | 512 MB |
| small  | 2000  | 4 GB   |
| medium | 4000  | 8 GB   |
| large  | 8000  | 16 GB  |
| xl     | 16000 | 32 GB  |

### Snapshots

```bash
av snapshot create <sandbox-id> --name "My Environment"
av snapshot ls
av snapshot restore <snapshot-id>
```

### Apps

```bash
av app create --name web --port 8080 --min 0 --max 3
av app create --name api --port 8080 --ready http_health --health-path /health
av app deploy web --snapshot <snapshot-id>
av app ls
av app inspect <app-id-or-slug>
av app logs <app-id-or-slug>
av app rm <app-id-or-slug>
```

Notes:
- Ready types: `port_accessible`, `http_health`.
- If you specify `http_health` without `--health-path`, the CLI defaults to `/health`.

### av.function (Single-node remote function)

```python
import agentenv as av

@av.function("small", image="python:3.11-slim")
def add(x, y):
    return x + y

print(add(2, 3))
```

Using an ImageBuilder:

```python
import agentenv as av

builder = av.py().python_packages(["numpy"])

@av.function("small", image=builder)
def norm(x):
    import numpy as np
    return float(np.linalg.norm(x))

print(norm([3, 4]))
```

Notes:
- Only single-node specs are supported (preset types like `small`, or a `cpu:mem` pair; see the sketch after these notes).
- The function must be importable in the sandbox image (no nested or `__main__` functions).
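
As an illustration of the `cpu:mem` form (the values are hypothetical, and reading them as millicores and MB, like the `cpu`/`memory` defaults in the config file section below, is an assumption):

```python
import agentenv as av

# Resource-pair spec instead of a preset type; values are illustrative only.
@av.function("2000:4096", image="python:3.11-slim")
def square(x):
    return x * x

print(square(7))
```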

### Browser Sessions

```bash
av browser create                    # Create browser session
av browser create --screen-width 1920 --screen-height 1080 --stealth
av browser create --profile-mode ephemeral --rrweb
av browser ls
av browser inspect <id>
```

### Notebook Sessions

```bash
av notebook session create --workspace <workspace-id>
av notebook session create --workspace <workspace-id> --type xl
av notebook session create --workspace <workspace-id> --image docker://quay.io/jupyter/datascience-notebook:notebook-7.5.5
av notebook session create --workspace <workspace-id> --storage-mode persistent --idle-ttl 600
av notebook session list
av notebook session get <id>
```

### API Coverage

The CLI focuses on common day-to-day workflows. The Mintlify site and checked-in OpenAPI schema document the full `api-server` surface, including operational resources such as browser profiles, managed agents, proxy usage, captcha usage, and webhook ingress.

### Workspaces

```bash
av workspace create "My Workspace"
av workspace ls
av workspace use <workspace-id>
av workspace secret-set <ws-id> KEY value
```

### Files

```bash
av file upload ./myfile.txt
av file download /remote.txt ./local.txt
av file ls
```

### Workflows

```bash
av workflow ls
av workflow create "Daily Sync" --file workflow.json
av workflow inspect <workflow-id>
av workflow update <workflow-id> --file workflow.json
av workflow deploy <workflow-id>
av workflow undeploy <workflow-id>
av workflow execute <workflow-id> --input '{"customerId":"cus_123"}'
av workflow execute-in-memory --workspace-id <workspace-id> --file workflow.json
av workflow executions <workflow-id>
av workflow execution <workflow-id> <execution-id>
av workflow cancel <workflow-id> <execution-id>
av workflow metrics <workflow-id>
av workflow metrics-timeseries <workflow-id> --interval day
av workflow node-definitions
av workflow node-definition webhook
av workflow plugins
```

### Billing

```bash
av balance                           # Show balance
av billing history                   # Transaction history
```

### Configuration

```bash
av set type xl                       # Set default type
av set image python:3.11             # Set default image
av set workspace <workspace-id>      # Set default workspace
av config show                       # Show configuration
```

## AI Gateway

The CLI includes full support for the AgentEnv AI Gateway:

```bash
# Chat with AI models
av ai chat "Hello!" --model gpt-4

# Manage providers
av ai upstreams list
av ai pools create --name production

# See the dedicated AI Gateway guide for complete documentation
```

See the AI Gateway guide:
<https://github.com/agentenv/monorepo/blob/main/cli/README-AI-GATEWAY.md>

## Configuration

The CLI reads configuration from multiple sources (in priority order):

1. Command-line flags (`--api-url`, `--workspace`, etc.)
2. Environment variables (`AGENTENV_API_URL`, `AGENTENV_API_KEY`, etc.; illustrated after this list)
3. Config file (`~/.agentenv/config.yaml`)
4. Project `.env` file
5. Built-in defaults
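
For instance, environment variables set before the CLI or SDK runs override the config file and `.env` values. A minimal sketch (exactly when the SDK reads them is not pinned down here):

```python
import os

# Highest-priority settings short of explicit CLI flags.
os.environ["AGENTENV_API_URL"] = "http://localhost:3000"
os.environ["AGENTENV_API_KEY"] = "sk_live_xxxxx"

import agentenv as av  # the SDK reads these when it loads its configuration
```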

### Config File (~/.agentenv/config.yaml)

```yaml
api_url: http://localhost:3000
workspace: wk_abc123

defaults:
  type: small
  image: docker.io/library/python:3.11-slim
  cpu: 2000
  memory: 4096
  region: us-east-1
```

Set `api_url` to the API root without a `/v1` suffix; if you do include `/v1`, the CLI normalizes it automatically.

## Release

Build release artifacts locally:

```bash
cd cli
uv build
python3 -m twine check dist/*
```

Release flow:

```bash
# 1. Bump the package version in cli/src/agentenv/_version.py

# 2. Sanity-check the version metadata
cd cli
python3 scripts/check_release_version.py --print-version

# 3. Build and validate the distributions
uv build
python3 -m twine check dist/*

# 4. Tag the release with the enforced format
git tag "agv-v$(python3 scripts/check_release_version.py --print-version)"
git push origin --tags
```

Upload them manually when you have PyPI credentials:

```bash
python3 -m twine upload dist/*
```

The repository also includes a GitHub Actions workflow for trusted publishing to PyPI; the workflow rejects tags whose version does not match the package version.
