Metadata-Version: 2.4
Name: efference
Version: 0.1.4
Summary: Official Python SDK for Efference ML API - Process videos with GPU-accelerated machine learning models
Author-email: EfferenceAI <support@efference.ai>
License: MIT
Project-URL: Homepage, https://efference.ai
Project-URL: Documentation, https://docs.efference.ai
Project-URL: Repository, https://github.com/EfferenceAI/efference
Project-URL: Issues, https://github.com/EfferenceAI/efference/issues
Keywords: efference,ml,machine-learning,gpu,video-processing,inference,api,sdk,depth-estimation,computer-vision
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: httpx>=0.25.0
Requires-Dist: pillow>=10.0.0
Provides-Extra: visualization
Requires-Dist: matplotlib>=3.5.0; extra == "visualization"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: build>=1.0.0; extra == "dev"
Requires-Dist: twine>=4.0.0; extra == "dev"
Dynamic: requires-python

# Efference Python SDK

Official Python client for the **Efference ML API** - advanced 3D vision inference platform with GPU-accelerated depth estimation and correction.

## Table of Contents

- [Installation](#installation)
- [Quick Start](#quick-start)
- [Authentication](#authentication)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Error Handling](#error-handling)
- [Advanced Usage](#advanced-usage)

## Installation

### From PyPI 

```bash
pip install efference
```


## Quick Start

### Basic Video Processing

```python
from efference import EfferenceClient

client = EfferenceClient(api_key="sk_live_your_api_key")

result = client.videos.process("path/to/video.mp4")
print(f"Status: {result['status']}")
print(f"Credits deducted: {result['credits_deducted']}")
```

### Basic Image Processing

```python
result = client.images.process_rgbd(
    "color.png",
    "depth.png",
    save_visualization="depth_colored.png"
)

print(f"Depth range: {result['inference_result']['output']['min']:.2f}m - {result['inference_result']['output']['max']:.2f}m")
```

## Authentication

### Set Your API Key

The SDK takes your API key directly; the API base URL defaults to the production endpoint and can be overridden via the `EFFERENCE_API_URL` environment variable:

```python
import os

os.environ["EFFERENCE_API_URL"] = "https://api.efference.ai"

client = EfferenceClient(api_key="sk_live_your_key")
```

### Custom Endpoint (Testing)

```python
client = EfferenceClient(
    api_key="sk_test_your_key",
    base_url="http://localhost:8000"
)
```

## API Reference

### EfferenceClient

Main client class for interacting with the Efference API.

#### Initialization

```python
client = EfferenceClient(api_key: str, base_url: Optional[str] = None, timeout: Optional[float] = None)
```

**Parameters:**

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `api_key` | str | Your API key (sk_live_* or sk_test_*) | Required |
| `base_url` | str | Override the API endpoint (e.g. for local testing) | https://api.efference.ai |
| `timeout` | float | Request timeout in seconds | 300 |

**Raises:**
- `ValueError`: If api_key is empty

---

### Videos Namespace: `client.videos`

#### Process Single Frame

```python
result = client.videos.process(
    file_path: str | Path | file-like,
    model: str = "rgbd",
    content_type: str = None
) -> dict
```

Process a video file through the ML model. Runs inference on a single frame; use `process_batch` to process multiple frames.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `file_path` | str, Path, or file-like | Path to video file |
| `model` | str | Model variant (default: "rgbd") |
| `content_type` | str | MIME type (auto-detected if None) |

**Returns:** Dictionary with inference results

**Example:**

```python
result = client.videos.process("video.mp4")
print(result["inference_result"])
print(f"Credits used: {result['credits_deducted']}")
```

#### Process All Frames (Batch)

```python
result = client.videos.process_batch(
    file_path: str | Path | file-like,
    max_frames: int = None,
    frame_skip: int = 1,
    content_type: str = None
) -> dict
```

Process all or multiple frames from a video file.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `file_path` | str, Path, file-like | Path to video file |
| `max_frames` | int | Max frames to process (None = all) |
| `frame_skip` | int | Process every Nth frame |
| `content_type` | str | MIME type (auto-detected if None) |

**Example:**

```python
result = client.videos.process_batch(
    "video.mp4",
    max_frames=100,
    frame_skip=2
)
print(f"Processed {result['frames_processed']} frames")
print(f"Credits used: {result['credits_deducted']}")
```
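As a rough mental model of the sampling (an illustration only, not necessarily the server's exact selection policy): with `frame_skip=N`, every Nth frame starting from index 0 is taken, and `max_frames` caps how many of those are processed. The hypothetical helper below sketches this:

```python
def planned_frame_indices(frame_count, frame_skip=1, max_frames=None):
    """Illustrative: which frame indices a skip-then-cap policy would select."""
    indices = list(range(0, frame_count, frame_skip))
    if max_frames is not None:
        indices = indices[:max_frames]
    return indices

# Every 2nd frame of a 10-frame clip
print(planned_frame_indices(10, frame_skip=2))                      # [0, 2, 4, 6, 8]
# First 50 frames of a 660-frame video, matching the example response below
print(len(planned_frame_indices(660, frame_skip=1, max_frames=50)))  # 50
```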

**Response structure (example):**

```json
{
  "status": "success",
  "filename": "8134891-uhd_2160_4096_25fps.mp4",
  "file_size_bytes": 18040017,
  "model_name": "d435",
  "video_metadata": {
    "fps": 25.0,
    "frame_count": 660,
    "width": 1440,
    "height": 2732,
    "extracted_frames": 50
  },
  "frames_processed": 50,
  "frame_skip": 1,
  "batch_results": [
    {
      "frame_index": 0,
      "inference_result": {
        "model_type": "rgbd",
        "output": {
          "shape": [518, 518],
          "dtype": "float16",
          "min": 2.2734375,
          "max": 19.828125,
          "mean": 6.35546875,
          "has_valid_depth": true
        }
      }
    }
    // ... 49 more frame entries ...
  ],
  "processing_summary": {
    "total_frames_in_video": 660,
    "frames_extracted": 50,
    "frames_processed": 50
  },
  "credits_deducted": 15.602150440216064,
  "credits_remaining": true,
  "billing_info": {
    "base_cost": 2.0,
    "frame_cost": 5.0,
    "size_cost": 8.602150440216064,
    "total": 15.602150440216064
  }
}
```

Note: fields under `batch_results[*].inference_result.output` report depth statistics for each processed frame. The exact numbers depend on the input video and model configuration.
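Those per-frame statistics can be aggregated client-side. A minimal sketch, run here against a stub shaped like the response above:

```python
# Stub of a process_batch() response, shaped like the documented example.
result = {
    "batch_results": [
        {"frame_index": 0, "inference_result": {"output": {"min": 2.27, "max": 19.83, "mean": 6.36}}},
        {"frame_index": 1, "inference_result": {"output": {"min": 2.31, "max": 19.90, "mean": 6.41}}},
    ]
}

outputs = [f["inference_result"]["output"] for f in result["batch_results"]]
overall_min = min(o["min"] for o in outputs)   # closest point seen in any frame
overall_max = max(o["max"] for o in outputs)   # farthest point seen in any frame
avg_mean = sum(o["mean"] for o in outputs) / len(outputs)

print(f"Depth across frames: {overall_min:.2f}m - {overall_max:.2f}m (avg mean {avg_mean:.2f}m)")
```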

---

### Images Namespace: `client.images`

#### Process RGBD Image

```python
result = client.images.process_rgbd(
    rgb_path: str | Path | file-like,
    depth_path: str | Path | file-like = None,
    depth_scale: float = 1000.0,
    input_size: int = 518,
    max_depth: float = 25.0,
    save_visualization: str | Path = None,
    save_3panel: str | Path = None
) -> dict
```

Process RGB image with optional depth for depth estimation/correction.

**Parameters:**

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `rgb_path` | str, Path, file-like | Path to RGB image | Required |
| `depth_path` | str, Path, file-like | Optional depth image | None |
| `depth_scale` | float | Depth sensor scale factor | 1000.0 |
| `input_size` | int | Model input resolution | 518 |
| `max_depth` | float | Max depth for visualization | 25.0 |
| `save_visualization` | str, Path | Save colorized depth PNG | None |
| `save_3panel` | str, Path | Save comparison PNG | None |

**Example:**

```python
result = client.images.process_rgbd(
    "color.png",
    "depth_raw.png",
    depth_scale=1000.0,
    save_visualization="depth_colored.png",
    save_3panel="comparison.png"
)

print(f"Status: {result['status']}")
print(f"Depth range: {result['inference_result']['output']['min']:.2f}m - {result['inference_result']['output']['max']:.2f}m")
```
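`depth_scale` follows the common raw-units-per-meter convention (an assumption worth checking against your sensor's datasheet): a sensor that stores millimeters in a 16-bit image uses `depth_scale=1000.0`, so `meters = raw / depth_scale`. A quick sketch of the conversion:

```python
# Convert raw depth values to meters; 0 conventionally means "no reading".
row = [0, 500, 2273, 19828]   # raw 16-bit depth values
depth_scale = 1000.0          # raw units per meter (millimeters here)
meters = [v / depth_scale if v else None for v in row]  # None marks invalid depth
print(meters)
```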

#### Visualize Depth Results

```python
fig = client.images.visualize_depth(
    result: dict,
    mode: str = "single",
    show: bool = True
) -> matplotlib.figure.Figure
```

Display depth visualization using matplotlib.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `result` | dict | API response from process_rgbd() |
| `mode` | str | "single" or "3panel" |
| `show` | bool | Display immediately |

**Example:**

```python
result = client.images.process_rgbd("color.png", "depth.png")
fig = client.images.visualize_depth(result, mode="3panel")
```

---

### Streaming Namespace: `client.streaming`

#### Start Camera Stream

```python
result = client.streaming.start(camera_type: str = "realsense") -> dict
```

**Example:**

```python
result = client.streaming.start("realsense")
print(f"Status: {result['status']}")
```

#### Get Frame from Stream

```python
frame = client.streaming.get_frame(run_inference: bool = False) -> dict
```

**Example:**

```python
frame = client.streaming.get_frame(run_inference=True)
print(f"Frame #{frame['frame_data']['frame_count']}")
```

#### Stop Camera Stream

```python
result = client.streaming.stop() -> dict
```

#### Get Stream Status

```python
status = client.streaming.status() -> dict
```

---

### Models Namespace: `client.models`

#### Switch Model

```python
result = client.models.switch(model_name: str) -> dict
```

**Example:**

```python
result = client.models.switch("d405")
print(f"Active model: {result['current_model']}")
```

#### List Available Models

```python
models = client.models.list() -> dict
```

**Example:**

```python
models = client.models.list()
print(f"Available: {models['available_models']}")
```

---

## Examples

### Example 1: Simple Video Processing

```python
from efference import EfferenceClient

client = EfferenceClient(api_key="sk_live_your_key")

result = client.videos.process("test_video.mp4")

print(f"Status: {result['status']}")
print(f"File size: {result['file_size_bytes'] / 1e6:.2f}MB")
print(f"Credits deducted: {result['credits_deducted']:.2f}")
print(f"Credits remaining: {result['credits_remaining']}")
```

### Example 2: Batch Video Processing

```python
result = client.videos.process_batch(
    "long_video.mp4",
    max_frames=50,
    frame_skip=1
)

print(f"Processed {result['frames_processed']} frames")

for idx, frame_result in enumerate(result['batch_results']):
    print(f"Frame {idx}: {frame_result['inference_result']['output']}")
```

### Example 3: Image Depth Estimation with Visualization

```python
result = client.images.process_rgbd(
    "input/rgb.png",
    "input/depth.png",
    save_visualization="output/depth.png",
    save_3panel="output/comparison.png"
)

client.images.visualize_depth(result, mode="3panel", show=True)
```

### Example 4: Custom Depth Parameters

```python
result = client.images.process_rgbd(
    "color.png",
    depth_path="depth_raw.png",
    depth_scale=1000.0,
    input_size=518,
    max_depth=30.0
)

output = result['inference_result']['output']
print(f"Depth range: {output['min']:.2f}m - {output['max']:.2f}m")
print(f"Mean depth: {output['mean']:.2f}m")
```

### Example 5: Error Handling

```python
import httpx
from efference import EfferenceClient

client = EfferenceClient(api_key="sk_live_your_key")

try:
    result = client.videos.process("video.mp4")
except FileNotFoundError as e:
    print(f"File not found: {e}")
except httpx.HTTPStatusError as e:
    if e.response.status_code == 401:
        print("Authentication failed. Check your API key.")
    elif e.response.status_code == 402:
        print("Insufficient credits.")
    elif e.response.status_code == 413:
        print("File too large.")
    else:
        print(f"HTTP error: {e.response.status_code}")
except httpx.TimeoutException:
    print("Request timed out.")
except httpx.RequestError as e:
    print(f"Connection error: {e}")
```

## Error Handling

### Common Errors and Solutions

| Error | Cause | Solution |
|-------|-------|----------|
| 401 Unauthorized | Invalid API key | Verify key starts with sk_live_ or sk_test_ |
| 402 Payment Required | Insufficient credits | Purchase additional credits |
| 413 Payload Too Large | Video exceeds 500MB | Split into smaller files |
| 504 Gateway Timeout | Processing took too long | Increase timeout or reduce input size |
| 429 Too Many Requests | Rate limited | Implement exponential backoff |
| 500 Internal Server | Server error | Retry request after delay |

## Advanced Usage

### Custom Endpoint

```python
client = EfferenceClient(
    api_key="sk_live_your_key",
    base_url="http://your-server.local:8000"
)
```

### Custom Timeout

```python
client = EfferenceClient(
    api_key="sk_live_your_key",
    timeout=600.0  # 10 minutes
)
```

### File-like Objects

```python
import io

with open("video.mp4", "rb") as f:
    video_bytes = f.read()

video_io = io.BytesIO(video_bytes)

result = client.videos.process(video_io, content_type="video/mp4")
```

### Retry Logic

```python
import time

import httpx

def process_with_retry(client, video_path, max_retries=3):
    """Retry transient failures (timeouts, connection errors, 429/5xx) with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return client.videos.process(video_path)
        except httpx.HTTPStatusError as e:
            # Other 4xx errors are permanent; retrying will not help.
            if e.response.status_code < 500 and e.response.status_code != 429:
                raise
            if attempt == max_retries - 1:
                raise
        except httpx.RequestError:
            # Covers timeouts and connection errors.
            if attempt == max_retries - 1:
                raise
        wait_time = 2 ** attempt
        print(f"Attempt {attempt + 1} failed. Retrying in {wait_time}s...")
        time.sleep(wait_time)
```
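
When several clients hit a rate limit at once, plain exponential backoff can make them all retry in lockstep; adding random jitter spreads the retries out. A generic runnable sketch, independent of the SDK (`retry_with_backoff` is illustrative, not an SDK function):

```python
import random
import time

def retry_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Call fn(), retrying failures with exponential backoff plus random jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Exponential delay, scaled by a random factor in [1, 2) for jitter.
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)

# Demonstration with a stub that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```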

---

## Support and Resources

- **Documentation**: https://docs.efference.ai
- **API Status**: https://status.efference.ai
- **GitHub Issues**: https://github.com/EfferenceAI/efference/issues
- **Email**: support@efference.ai

## License

MIT License - See LICENSE file for details
