Metadata-Version: 2.4
Name: mimiry-cli
Version: 0.1.2
Summary: Python SDK for the Mimiry GPU Cloud API
Author: Mimiry
License-Expression: MIT
Project-URL: Documentation, https://mimiryprimary.lovable.app
Project-URL: Source, https://github.com/OTSorensen/mimiry-python-sdk
Project-URL: Bug Tracker, https://github.com/OTSorensen/mimiry-python-sdk/issues
Keywords: gpu,cloud,api,sdk,mimiry,machine-learning
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: httpx>=0.24.0
Provides-Extra: cli
Requires-Dist: typer>=0.9.0; extra == "cli"
Requires-Dist: rich>=13.0.0; extra == "cli"
Dynamic: license-file

# Mimiry Python SDK

Python client for the [Mimiry GPU Cloud API](https://mimiryprimary.lovable.app). Deploy GPU instances, submit batch jobs, and manage cloud resources programmatically.

## Installation

### Prerequisites (Linux/WSL)

On modern Linux distributions (Debian 12+, Ubuntu 23.04+, WSL) the system Python is marked externally managed ([PEP 668](https://peps.python.org/pep-0668/)), so install into a virtual environment:

```bash
# Create and activate a virtual environment
python3 -m venv ~/.venvs/mimiry
source ~/.venvs/mimiry/bin/activate
```

### Install from PyPI

```bash
pip install mimiry-cli          # SDK only
pip install "mimiry-cli[cli]"   # SDK + CLI
```

### Verify installation

```bash
mimiry version
# mimiry 0.1.2
```

### Install from source (development)

```bash
git clone https://github.com/OTSorensen/mimiry-python-sdk
cd mimiry-python-sdk
pip install -e ".[cli]"
```

## Quick Start

```python
from mimiry import MimiryClient

client = MimiryClient(api_key="mky_your_key_here")

# List available GPUs (defaults to Verda)
currency = "eur"
symbol = {"eur": "€", "usd": "$"}.get(currency, currency.upper())
for gpu in client.list_instance_types(currency=currency):
    print(f"{gpu['instance_type']} — {symbol}{gpu['price_per_hour']}/hr")

# List Scaleway GPUs
for gpu in client.list_instance_types(provider="scaleway"):
    print(f"{gpu['instance_type']} — €{gpu['price_per_hour']}/hr")

# Submit a job
job = client.submit_job(
    name="training-run",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["your-key-uuid"],
    startup_script="#!/bin/bash\npython train.py",
    auto_shutdown=True,
)
print(f"Job {job['id']} submitted — status: {job['status']}")
```

## Authentication

1. Create an API key from the [Mimiry Dashboard](https://mimiryprimary.lovable.app) → **API Keys**
2. Pass it to the client:

```python
client = MimiryClient(api_key="mky_your_key_here")
```

API keys require scopes for the endpoints you want to access:

| Scope | Endpoints |
|-------|-----------|
| `jobs:read` | List/get jobs |
| `jobs:write` | Submit/cancel jobs |
| `instances:read` | List GPUs, locations, availability, images, providers |
| `ssh_keys:read` | List SSH keys |
| `ssh_keys:write` | Add/delete SSH keys |
| `registry:read` | List registry credentials |
| `registry:write` | Add/delete registry credentials |
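
Calls made with a key that lacks the required scope fail with `InsufficientScopeError` (HTTP 403). A minimal sketch of handling this, assuming the exception is importable from the top-level package like the other exceptions listed under **Error Handling** below:

```python
from mimiry import MimiryClient, InsufficientScopeError

client = MimiryClient(api_key="mky_key_with_read_scopes_only")

try:
    job = client.submit_job(...)  # requires the jobs:write scope
except InsufficientScopeError:
    print("This API key is missing the jobs:write scope")
```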

## Supported Providers

The SDK is provider-agnostic. Pass `provider="..."` to target a specific backend.

| Provider | Slug | GPU Types | Locations |
|----------|------|-----------|-----------|
| Verda | `verda` (default) | V100, A100, H100, etc. | FIN-01 (Helsinki) |
| Scaleway | `scaleway` | H100, L4, L40S, B300 | fr-par-2 (Paris), nl-ams-1 (Amsterdam), pl-waw-2 (Warsaw), + more |

> **Note:** The legacy slug `datacrunch` is still accepted as an alias for `verda` for backward compatibility.
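
Because every method accepts the same `provider` argument, you can iterate over backends generically. A minimal sketch using `list_providers()` (documented below) and the fields shown in the Quick Start:

```python
# Print the cheapest GPU offered by each configured provider
for provider in client.list_providers():
    gpus = client.list_instance_types(provider=provider["slug"])
    cheapest = min(gpus, key=lambda g: g["price_per_hour"])
    print(f"{provider['name']}: {cheapest['instance_type']} at {cheapest['price_per_hour']}/hr")
```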

### Scaleway Instance Types

Scaleway GPU types follow the pattern `{GPU}-{count}-{VRAM per GPU}`:

| Instance Type | GPU | Total VRAM | Example |
|---------------|-----|------------|---------|
| `H100-1-80G` | 1× H100 | 80 GB | Single H100 |
| `H100-2-80G` | 2× H100 | 160 GB | Dual H100 |
| `L40S-1-48G` | 1× L40S | 48 GB | Single L40S |
| `L4-1-24G` | 1× L4 | 24 GB | Single L4 |
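
Since the name encodes the GPU model, count, and per-GPU VRAM, it can be split directly. A small parsing sketch (hypothetical helper, not part of the SDK):

```python
def parse_scaleway_type(name: str) -> dict:
    """Split e.g. 'H100-2-80G' into GPU model, GPU count, and VRAM per GPU."""
    gpu, count, vram = name.split("-")
    return {"gpu": gpu, "count": int(count), "vram_per_gpu": vram}

parse_scaleway_type("H100-2-80G")
# {'gpu': 'H100', 'count': 2, 'vram_per_gpu': '80G'}
```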

### Scaleway Locations

| Code | Name |
|------|------|
| `fr-par-1` | Paris 1 |
| `fr-par-2` | Paris 2 (GPU) |
| `fr-par-3` | Paris 3 |
| `nl-ams-1` | Amsterdam 1 |
| `nl-ams-2` | Amsterdam 2 |
| `pl-waw-2` | Warsaw 2 |
| `pl-waw-3` | Warsaw 3 |

## API Reference

### Jobs

```python
# List all jobs
jobs = client.list_jobs()

# Get job details
job = client.get_job("job-uuid")

# Submit a job (Verda — default)
job = client.submit_job(
    name="my-job",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["key-uuid"],
    startup_script="#!/bin/bash\nnvidia-smi",
    auto_shutdown=True,
    heartbeat_timeout_seconds=1800,  # optional, default 600
    max_runtime_seconds=7200,        # optional, no default
)

# Submit a job on Scaleway
job = client.submit_job(
    name="scaleway-training",
    instance_type="H100-1-80G",
    image="ubuntu_jammy",
    location="fr-par-2",
    ssh_key_ids=["key-uuid"],
    startup_script="#!/bin/bash\nnvidia-smi",
    provider="scaleway",
    auto_shutdown=True,
)

# Cancel a job
client.cancel_job("job-uuid")

# Wait for a job to finish (polls every 10s, timeout 1h)
result = client.wait_for_job("job-uuid", poll_interval=10, timeout=3600)

# Submit and wait in one call
result = client.submit_job_and_wait(
    name="my-job",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["key-uuid"],
    startup_script="#!/bin/bash\npython train.py",
)
```

### Streaming Logs & Metrics

Jobs automatically stream stdout/stderr to the dashboard in real time (every 15s). You can also emit **structured metrics** (loss, accuracy, etc.) that appear as interactive charts.

#### Emitting Metrics (File Convention)

Write JSONL to `/tmp/mimiry_metrics.jsonl` — no SDK dependency required:

```python
import json

# In your training loop:
for epoch in range(num_epochs):
    loss = train_one_epoch()
    accuracy = evaluate()
    
    # Write metrics — they appear as live charts in the dashboard
    with open("/tmp/mimiry_metrics.jsonl", "a") as f:
        f.write(json.dumps({
            "step": epoch,
            "loss": float(loss),
            "accuracy": float(accuracy),
            "learning_rate": optimizer.param_groups[0]["lr"],
        }) + "\n")
```

**Rules:**
- Each line must be valid JSON
- Include a `step` or `epoch` field for the X-axis
- All numeric fields are automatically plotted
- The agent watches the file every 10s and streams new entries to the dashboard
- No SDK import needed — works with any framework (PyTorch, TensorFlow, JAX, etc.); see the helper sketch after this list
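
To keep the convention in one place, you can wrap the write in a small helper (hypothetical name, not part of the SDK):

```python
import json

METRICS_FILE = "/tmp/mimiry_metrics.jsonl"

def log_metrics(step: int, **metrics) -> None:
    """Append one JSONL line; numeric fields show up as chart series."""
    with open(METRICS_FILE, "a") as f:
        f.write(json.dumps({"step": step, **metrics}) + "\n")

# Usage inside a training loop:
# log_metrics(epoch, loss=0.42, accuracy=0.91)
```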

#### PyTorch Example

```python
import json, torch

metrics_file = "/tmp/mimiry_metrics.jsonl"

for epoch in range(100):
    model.train()
    total_loss = 0
    for batch in train_loader:
        loss = train_step(model, batch)
        total_loss += loss.item()
    
    avg_loss = total_loss / len(train_loader)
    val_acc = evaluate(model, val_loader)
    
    with open(metrics_file, "a") as f:
        f.write(json.dumps({
            "step": epoch,
            "train_loss": avg_loss,
            "val_accuracy": val_acc,
            "gpu_memory_mb": torch.cuda.max_memory_allocated() / 1e6,
        }) + "\n")
```

#### Viewing Logs & Metrics

- **Dashboard**: Click any job → **Logs** tab shows streaming output, **Metrics** tab shows interactive charts
- **API**: Logs and metrics are stored in the `job_logs` and `job_metrics` tables, queryable via the Supabase client (see the sketch below)
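
A minimal query sketch using the `supabase-py` client; the project URL, key, and column names (`job_id`, `step`) are assumptions, so check your project settings and schema:

```python
from supabase import create_client

# Hypothetical credentials: substitute your own Supabase project URL and key
supabase = create_client("https://<project>.supabase.co", "<anon-or-service-key>")

# Fetch stored metrics for one job, ordered by training step
rows = (
    supabase.table("job_metrics")
    .select("*")
    .eq("job_id", "job-uuid")
    .order("step")
    .execute()
    .data
)
```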

### Instance Types

```python
# List all GPU types with pricing (EUR default, Verda default)
gpus = client.list_instance_types()

# List in USD
gpus = client.list_instance_types(currency="usd")

# List Scaleway GPU types
gpus = client.list_instance_types(provider="scaleway")
```

### Availability

```python
# Check all availability (Verda)
available = client.check_availability()

# Check specific instance type
available = client.check_availability(instance_type="1V100.6V")

# Check Scaleway availability
available = client.check_availability(provider="scaleway")
available = client.check_availability(instance_type="H100-1-80G", provider="scaleway")
```

### Locations

```python
# Verda locations
locations = client.list_locations()

# Scaleway locations
locations = client.list_locations(provider="scaleway")
```

### OS Images

```python
# All images (Verda)
images = client.list_images()

# Images compatible with a specific GPU type
images = client.list_images(instance_type="1V100.6V")

# Scaleway images
images = client.list_images(provider="scaleway")
```

### Providers

```python
providers = client.list_providers()
# Returns: [{"name": "Verda", "slug": "verda"}, {"name": "Scaleway", "slug": "scaleway"}]
```

### SSH Keys

SSH keys are synced to all active providers automatically when created via the API.

```python
# List keys
keys = client.list_ssh_keys()

from pathlib import Path

# Add a key (synced to Verda + Scaleway); expanduser() resolves "~", which plain open() does not
key = client.add_ssh_key("my-laptop", Path("~/.ssh/id_ed25519.pub").expanduser().read_text())

# Delete a key (removed from all providers)
client.delete_ssh_key("key-uuid")
```

### Container Registry Credentials

Store credentials for private container registries (Docker Hub, AWS ECR, GHCR, etc.). When you submit a job with a `container_image` and `registry_credential_id`, the platform automatically runs `docker login` + `docker pull` before your startup script.

```python
# List saved credentials
creds = client.list_registry_credentials()

# Add a generic credential (Docker Hub, GHCR, etc.)
cred = client.add_registry_credential(
    name="Docker Hub",
    registry_url="docker.io",
    username="myuser",
    password="dckr_pat_xxxxxxxxxxxx",
    is_default=True,
)

# Add an AWS ECR credential
# Your IAM credentials are stored securely. At job dispatch time, the platform
# exchanges them for a short-lived ECR token (valid 12h) server-side.
# Your AWS credentials never touch the compute node.
ecr_cred = client.add_registry_credential(
    name="My ECR",
    registry_url="123456789.dkr.ecr.eu-west-1.amazonaws.com",
    username="AKIAIOSFODNN7EXAMPLE",        # AWS Access Key ID
    password="wJalrXUtnFEMI/K7MDENG/bPx",   # AWS Secret Access Key
    registry_type="aws_ecr",
    aws_region="eu-west-1",
)

# Delete a credential
client.delete_registry_credential("credential-uuid")

# Submit a job with a private container image
job = client.submit_job(
    name="bio-pipeline",
    instance_type="1V100.6V",
    image="ubuntu-22.04-cuda-12.0",
    location="FIN-01",
    ssh_key_ids=["key-uuid"],
    container_image="ghcr.io/myorg/pipeline:v2",
    registry_credential_id=cred["id"],
    startup_script="docker run ghcr.io/myorg/pipeline:v2 --data /mnt/data",
    auto_shutdown=True,
)
```

## CLI

For terminal usage, see the [CLI Guide](/cli-guide).

## Error Handling

The SDK raises typed exceptions for API errors:

```python
from mimiry import MimiryClient, MimiryError, AuthenticationError, InsufficientCreditsError

client = MimiryClient(api_key="mky_...")

try:
    job = client.submit_job(...)
except AuthenticationError:
    print("Invalid API key")
except InsufficientCreditsError:
    print("Not enough credits — top up at the dashboard")
except MimiryError as e:
    print(f"API error [{e.status_code}]: {e.message}")
```

| Exception | HTTP Status | Meaning |
|-----------|-------------|---------|
| `AuthenticationError` | 401 | Invalid or missing API key |
| `InsufficientCreditsError` | 402 | Not enough credits |
| `InsufficientScopeError` | 403 | API key lacks required scope |
| `NotFoundError` | 404 | Resource not found |
| `RateLimitError` | 429 | Too many requests |
| `ServerError` | 5xx | Server-side error |
| `MimiryError` | other | Catch-all base exception |

## Context Manager

The client can be used as a context manager to ensure connections are closed:

```python
with MimiryClient(api_key="mky_...") as client:
    jobs = client.list_jobs()
```

## Configuration

```python
client = MimiryClient(
    api_key="mky_...",
    base_url="https://custom-endpoint.example.com",  # override API URL
    timeout=60.0,       # request timeout in seconds (default 30)
    max_retries=5,      # retry count for transient failures (default 3)
)
```

## Requirements

- Python ≥ 3.8
- `httpx` ≥ 0.24.0

## License

MIT
