Metadata-Version: 2.4
Name: cmdop-llm
Version: 0.1.0
Summary: Python SDK for CMDOP LLM Service - OpenAI-compatible API for 200+ AI models
Project-URL: Homepage, https://cmdop.com
Project-URL: Documentation, https://sdk.cmdop.com
Project-URL: Repository, https://github.com/markolofsen/cmdop-client
Project-URL: Issues, https://github.com/markolofsen/cmdop-client/issues
Author-email: CMDOP Team <support@cmdop.com>
License-Expression: MIT
License-File: LICENSE
Keywords: ai,anthropic,api,chat,claude,cmdop,gpt,llm,openai,openrouter
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: httpx<1,>=0.23.0
Requires-Dist: openai<3.0.0,>=1.0.0
Requires-Dist: pydantic<3,>=2.0.0
Requires-Dist: typing-extensions>=4.5.0
Provides-Extra: dev
Requires-Dist: mypy>=1.0.0; extra == 'dev'
Requires-Dist: pyright>=1.1.300; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: respx>=0.20.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Description-Content-Type: text/markdown

# CMDOP LLM Python SDK

Python SDK for the CMDOP LLM Service, an OpenAI-compatible API for 200+ AI models.

## Installation

```bash
pip install cmdop-llm
```

## Quick Start

```python
from cmdop_llm import CmdopLLM

client = CmdopLLM(api_key="your-api-key")

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

## Features

- **Drop-in OpenAI replacement** - Same API, different models
- **200+ Models** - GPT-4, Claude, Llama, Mistral, and Gemini via a single endpoint
- **Streaming** - Real-time token streaming
- **Vision & OCR** - Image analysis and text extraction
- **Image Generation** - FLUX, DALL-E, and other models
- **Async Support** - Full async/await API via `AsyncCmdopLLM`

## Environment Variables

```bash
export CMDOP_API_KEY="your-api-key"
export CMDOP_BASE_URL="https://llm.cmdop.com"  # Optional; this is the default
```
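When the client is constructed without arguments, the lookup order is presumably: explicit argument first, then environment variable, then the built-in default. A minimal sketch of that precedence (`resolve_config` is illustrative, not part of the SDK):

```python
import os
from typing import Optional

DEFAULT_BASE_URL = "https://llm.cmdop.com"

def resolve_config(api_key: Optional[str] = None, base_url: Optional[str] = None) -> dict:
    """Illustrate the assumed precedence: explicit argument > env var > default."""
    return {
        "api_key": api_key or os.environ.get("CMDOP_API_KEY"),
        "base_url": base_url or os.environ.get("CMDOP_BASE_URL", DEFAULT_BASE_URL),
    }

os.environ["CMDOP_API_KEY"] = "sk-demo"
config = resolve_config(base_url="https://staging.example.com")
print(config["api_key"])   # sk-demo (picked up from the environment)
print(config["base_url"])  # https://staging.example.com (explicit argument wins)
```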

## Usage Examples

### Chat Completion

```python
from cmdop_llm import CmdopLLM

client = CmdopLLM()

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing."}
    ],
    temperature=0.7,
    max_tokens=1000,
)
print(response.choices[0].message.content)
```

### Streaming

```python
stream = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Write a poem."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
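If the full text is needed after streaming, the deltas can be joined once the stream ends. A sketch, using `SimpleNamespace` objects as stand-ins for real chunks so it runs offline:

```python
from types import SimpleNamespace

def collect_stream(stream) -> str:
    """Accumulate the text deltas of a streaming response into one string."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # trailing chunks may carry no content
            parts.append(delta)
    return "".join(parts)

# Stand-in chunks shaped like the streaming objects above.
def fake_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

stream = [fake_chunk("Roses "), fake_chunk("are red"), fake_chunk(None)]
print(collect_stream(stream))  # Roses are red
```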

### Vision Analysis

```python
result = client.vision.analyze(
    image_url="https://example.com/image.jpg",
    prompt="Describe this image"
)
print(result.description)
print(result.extracted_text)
```

### OCR Text Extraction

```python
result = client.ocr.extract(
    image_url="https://example.com/document.png"
)
print(result.text)
```

### Image Generation

```python
response = client.images.generate(
    model="black-forest-labs/FLUX.1-schnell",
    prompt="A futuristic cityscape",
    size="1024x1024",
)
print(response.data[0].url)
```

### Async Usage

```python
import asyncio
from cmdop_llm import AsyncCmdopLLM

async def main():
    client = AsyncCmdopLLM()

    response = await client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
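The async client also makes it easy to fan out several prompts concurrently with `asyncio.gather`. A sketch of the pattern, with `fake_completion` standing in for the real `client.chat.completions.create` call so it runs offline:

```python
import asyncio

async def fake_completion(prompt: str) -> str:
    # Stand-in for `await client.chat.completions.create(...)`.
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def run_all(prompts):
    # gather runs the coroutines concurrently and returns results in input order.
    return await asyncio.gather(*(fake_completion(p) for p in prompts))

results = asyncio.run(run_all(["Hello!", "Name a color."]))
print(results)  # ['echo: Hello!', 'echo: Name a color.']
```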

## Available Models

Access 200+ models including:

- **OpenAI**: gpt-4o, gpt-4o-mini, gpt-4-turbo
- **Anthropic**: claude-3.5-sonnet, claude-3-opus, claude-3-haiku
- **Google**: gemini-pro, gemini-1.5-pro
- **Meta**: llama-3.1-405b, llama-3.1-70b
- **Mistral**: mistral-large, mixtral-8x22b
- **Image**: FLUX.1-schnell, FLUX.1-pro, stable-diffusion-xl

Use the model format `provider/model-name` (e.g., `openai/gpt-4o`).
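Since every id follows `provider/model-name`, splitting an id back into its parts is a one-liner (a utility sketch, not an SDK function):

```python
def split_model_id(model_id: str):
    """Split 'provider/model-name' into (provider, model name)."""
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model_id("anthropic/claude-3.5-sonnet"))  # ('anthropic', 'claude-3.5-sonnet')
```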

## API Reference

### CmdopLLM

```python
CmdopLLM(
    api_key: Optional[str] = None,    # From CMDOP_API_KEY env if not set
    base_url: Optional[str] = None,   # Default: https://llm.cmdop.com
    timeout: Optional[float] = None,  # Request timeout
    max_retries: int = 2,             # Retry count
)
```

### Resources

- `client.chat.completions` - Chat completions (OpenAI compatible)
- `client.images` - Image generation (OpenAI compatible)
- `client.models` - List available models
- `client.vision` - Vision analysis (CMDOP specific)
- `client.ocr` - OCR extraction (CMDOP specific)

## License

MIT
