Metadata-Version: 2.4
Name: studiolm
Version: 0.1.0
Summary: Python SDK for the StudioLM API – chat completions and image generation
License-Expression: MIT
Project-URL: Homepage, https://studiolm.dev
Project-URL: Documentation, https://docs.studiolm.dev
Project-URL: Repository, https://github.com/studiolm/studiolm-python
Keywords: studiolm,ai,image generation,llm,sdk
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: httpx>=0.24
Provides-Extra: dev
Requires-Dist: pytest>=7; extra == "dev"
Requires-Dist: pytest-httpx>=0.21; extra == "dev"

# studiolm

Official Python SDK for the [StudioLM API](https://studiolm.dev) — chat completions and image generation.

---

## Installation

```bash
pip install studiolm
```

For local development (from this repo):

```bash
pip install -e .
```

---

## Quick Start

```python
import studiolm

client = studiolm.Client(api_key="sk-...")

# Generate an image and save it
image = client.generate(
    "A serene mountain lake at dawn, cinematic lighting, 8K",
    model="imagen-v3",
    size="1024x1024",
    style="vivid",
)
image.save("masterpiece.png")

# Chat completions
response = client.chat.completions.create(
    model="gemma-3-12b-it-qat",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["choices"][0]["message"]["content"])
```

You can also set the API key via an environment variable:

```bash
export STUDIOLM_API_KEY="sk-..."
```

```python
import studiolm

client = studiolm.Client()  # reads STUDIOLM_API_KEY automatically
```

---

## Image Generation

### Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `prompt` | str | required | Text description of the image |
| `model` | str | auto | Model display name (e.g. `"imagen-v3"`) |
| `size` | str | `"1024x1024"` | Output resolution — see size table below |
| `style` | str | `"vivid"` | `preview`, `natural`, `vivid`, `upscaled` |
| `aspect_ratio` | str | derived from size | `square`, `portrait`, `landscape` |
| `negative_prompt` | str | — | Elements to avoid |
| `seed` | int | random | Reproducibility seed |
| `response_format` | str\|list | `"url"` | `url`, `b64_json`, `hxd`, or a list |

### Supported sizes

| Size | Aspect ratio |
|---|---|
| `512x512` | square |
| `768x768` | square |
| `1024x1024` | square |
| `832x1216` | portrait |
| `512x768` | portrait |
| `1216x832` | landscape |
| `768x512` | landscape |

Add custom sizes at runtime:

```python
import studiolm
studiolm.SIZE_PRESETS["640x640"] = "square"
```
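
When `aspect_ratio` is not passed, it is derived from `size`. A minimal sketch of that derivation, assuming it simply compares width and height (the SDK's real logic may differ):

```python
def aspect_from_size(size: str) -> str:
    """Derive an aspect-ratio label from a "WIDTHxHEIGHT" size string."""
    width, height = (int(part) for part in size.split("x"))
    if width == height:
        return "square"
    return "portrait" if height > width else "landscape"

print(aspect_from_size("1024x1024"))  # square
print(aspect_from_size("832x1216"))   # portrait
print(aspect_from_size("1216x832"))   # landscape
```

This reproduces every row of the table above, including any custom presets that follow the same convention.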

### Examples

```python
# Portrait, natural style
image = client.generate(
    "A knight standing in a misty forest",
    model="imagen-v3",
    size="832x1216",
    style="natural",
)
image.save("knight.png")

# Landscape with negative prompt and seed
image = client.generate(
    "Cyberpunk city at night",
    size="1216x832",
    style="vivid",
    negative_prompt="blurry, low quality",
    seed=42,
)
image.save("city.png")

# Image-to-image from URL
image = client.generate(
    "Transform into Studio Ghibli style",
    reference_image_url="https://example.com/photo.jpg",
    denoising_strength=0.65,
)
image.save("ghibli.png")

# Image-to-image from local file
import base64
with open("my_photo.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

image = client.generate(
    "Make it look like a watercolor painting",
    reference_image=f"data:image/png;base64,{b64}",
    denoising_strength=0.5,
)
image.save("watercolor.png")

# Get URL + base64 in one request
image = client.generate(
    "A galaxy nebula",
    response_format=["url", "b64_json"],
)
print(image.url)
image.save("nebula.png")  # uses b64_json for saving
```

### List image models

```python
models = client.images.available_models()
for m in models:
    print(m["display_name"], "-", m["description"])
```

---

## Chat Completions

```python
# Basic
response = client.chat.completions.create(
    model="gemma-3-12b-it-qat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum entanglement briefly."},
    ],
    temperature=0.7,
    max_tokens=500,
)
print(response["choices"][0]["message"]["content"])

# Streaming
for chunk in client.chat.completions.create(
    model="gemma-3-12b-it-qat",
    messages=[{"role": "user", "content": "Write a haiku about coding"}],
    stream=True,
):
    delta = chunk["choices"][0].get("delta", {})
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()

# Web search (smart mode)
response = client.chat.completions.create(
    model="gemma-3-27b-it-qat",
    messages=[{"role": "user", "content": "What happened in AI news today?"}],
    web_search="auto",
)

# JSON mode
response = client.chat.completions.create(
    model="gemma-3-12b-it-qat",
    messages=[{"role": "user", "content": "Return a JSON list of 3 fruits."}],
    response_format="json",
)
```
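
In JSON mode the message content arrives as a JSON string, so parse it before use. A sketch with a hard-coded sample payload standing in for `response["choices"][0]["message"]["content"]` (the actual content will vary):

```python
import json

# Hypothetical content a JSON-mode response might return
content = '["apple", "banana", "cherry"]'

fruits = json.loads(content)
print(fruits)  # ['apple', 'banana', 'cherry']
```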

---

## Models

```python
# List available chat/text models
models = client.models.list()
for m in models:
    print(m["id"])

# List image generation models
image_models = client.images.available_models()
```

---

## Context manager

```python
with studiolm.Client(api_key="sk-...") as client:
    image = client.generate("A sunset over the ocean")
    image.save("sunset.png")
```

---

## Custom base URL (self-hosted)

```python
client = studiolm.Client(
    api_key="sk-...",
    base_url="http://localhost:8000",
)
```

Or via environment variable:

```bash
export STUDIOLM_BASE_URL="http://localhost:8000"
```

---

## License

MIT
