Metadata-Version: 2.4
Name: nineth
Version: 0.5.26
Summary: Model SDK built by the 9th District at Tooig
Project-URL: Homepage, https://github.com/districtt/rooster
Project-URL: Bug Tracker, https://github.com/districtt/rooster/issues
Author-email: "Tooig, Inc" <tooighq@gmail.com>, Oyebamijo <boy@oyebamijo.com>
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.10
Requires-Dist: httpx<1.0,>=0.27.2
Description-Content-Type: text/markdown

# nineth

`nineth` is the Python SDK for the 1984 model API, built by the 9th District at Tooig.

---

## Install

```bash
pip install nineth
export NINETH_API_KEY="your-api-key"
```

---

## How it works

Every request goes through `client.model.request(...)`.

- Pass a task. Get a response.
- Set `stream=True` to receive text as it arrives, word by word.
- Per request, you can opt into persistent sessions (`cache`), built-in services (`default_service`), custom services (`include_service`), a base-provider override (`base_system`), and debug telemetry (`debug`).
- The server still runs the worker loop and manages the actual task state.
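
A minimal sketch of how those options compose on a single call (the combination shown is illustrative; each keyword is covered in the Cookbook below):

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Give me a quick market brief.",
        model="1984-m2-light",            # per-request model override
        reasoning="medium",               # reasoning-depth hint
        cache=True,                       # persist the session, get a session_id back
        default_service=["search_web"],   # opt into built-in services
        debug=True,                       # keep structured telemetry in the response
    )
    print(response["final_response"])
```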

---

## Models

| Name | Description |
|---|---|
| `1984-m3-0317` | Most capable. Best for research and complex tasks. |
| `1984-m2-preview` | Fast and powerful. Good for most tasks. |
| `1984-m2-light` | Lightweight model for quick, general tasks. |
| `1984-m1-unified` | High-throughput unified model. |
| `1984-m0-brute` | Compact, efficient model. |
| `1984-m0-sm` | Smallest model, fastest responses. |

Set a default at client creation or pass `model=` per call.
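
Both styles can coexist; a small sketch (model names taken from the table above):

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m2-preview") as client:
    # Uses the client default: 1984-m2-preview.
    quick = client.model.request("One-line BTC summary.")

    # Overrides the default for this call only.
    deep = client.model.request(
        "Full research brief on BTC.",
        model="1984-m3-0317",
    )
    print(quick["final_response"])
    print(deep["final_response"])
```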

---

## Cookbook

### 1 — Get a response

The simplest case. Ask something, get the answer.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request("Give me a tight BTC market brief.")
    print(response["final_response"])
```

`response` is a plain dict. The text is always in `response["final_response"]`.

---

### 2 — Stream the response live

Set `stream=True` to print text as it arrives.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    for event in client.model.request("Summarise crude oil today.", stream=True):
        if event["type"] == "model_delta":
            print(event["data"]["text"], end="", flush=True)
```

The last event in the stream is `type: result` and contains the full `final_response`
alongside `iterations`.
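
If you also want the buffered fields after streaming, capture that final event; a small sketch:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    final = None
    for event in client.model.request("Summarise crude oil today.", stream=True):
        if event["type"] == "model_delta":
            print(event["data"]["text"], end="", flush=True)
        elif event["type"] == "result":
            final = event["data"]  # full final_response plus iterations

    if final:
        print("\nTurns used:", final["iterations"])
```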

---

### 3 — Choose a different model per request

```python
from nineth import NinethClient

with NinethClient() as client:
    response = client.model.request(
        "What happened with Nvidia earnings?",
        model="1984-m2-light",
    )
    print(response["final_response"])
```

---

### 4 — Control reasoning depth

Use `reasoning` to hint at how deeply the model should think before answering.
Valid values: `"low"`, `"medium"`, `"high"`. Leave it out to use the model default.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Analyse the macro impact of a Fed rate pause.",
        reasoning="high",
    )
    print(response["final_response"])
```

---

### 5 — Show the model's reasoning

Set `show_reasoning=True` to include the model's internal chain-of-thought.
This is off by default.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Walk me through whether gold is trending or ranging.",
        reasoning="medium",
        show_reasoning=True,
    )
    for block in response.get("thinking", []):
        print("[thinking]", block)
    print(response["final_response"])
```

---

### 6 — Limit how many turns the model takes

`max_iterations` controls how many model turns the server runs.
The default is `10`. Most tasks finish in 1–3 turns.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Give me a one-paragraph ETH brief.",
        max_iterations=2,
    )
    print(response["final_response"])
```

---

### 7 — Async usage

```python
import asyncio
from nineth import AsyncNinethClient

async def main():
    async with AsyncNinethClient(default_model="1984-m3-0317") as client:
        response = await client.model.request(
            "Summarise macro risk factors this week.",
        )
        print(response["final_response"])

asyncio.run(main())
```

Async streaming works the same way:

```python
import asyncio
from nineth import AsyncNinethClient

async def main():
    async with AsyncNinethClient(default_model="1984-m3-0317") as client:
        async for event in await client.model.request(
            "Research BTC ETF flows.", stream=True
        ):
            if event["type"] == "model_delta":
                print(event["data"]["text"], end="", flush=True)

asyncio.run(main())
```

---

### 8 — Health check

No API key needed. Use this to verify the endpoint is reachable.

```python
from nineth import NinethClient

with NinethClient() as client:
    print(client.health())
# {'status': 'ok', 'timestamp': '2026-04-04T00:00:00+00:00'}
```

---

### 9 — Provider routing

SDK requests use the base-system provider path by default.
Set `base_system=False` only when you explicitly want to fall back to the runtime default provider selection.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0421") as client:
    response = client.model.request(
        "Summarise today's macro tape.",
        base_system=False,
    )
    print(response["final_response"])
```

`base_system` is `True` by default.

---

### 10 — Persist and reuse a session

Set `cache=True` to persist the task session and receive a reusable `session_id`.
Pass that `session_id` back into later requests to continue the same task memory.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    first = client.model.request(
        "Start a running research notebook for crude oil.",
        cache=True,
    )
    session_id = first["session_id"]

    second = client.model.request(
        "Continue from the last note and add today's macro drivers.",
        cache=True,
        session_id=session_id,
    )
    print(second["final_response"])
```

If `cache=False`, the task is treated as one-shot and no `session_id` is exposed.

---

### 11 — Opt into built-in services

Built-in services are off by default.

- `default_service=False`: no built-in services
- `default_service=True`: all built-in services
- `default_service=[...]`: only the named built-in services

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Research OPEC headlines and summarise the impact.",
        default_service=["search_news", "search_web"],
    )
    print(response["final_response"])
```
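
To enable every built-in service rather than naming them, pass the boolean form; a sketch:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Research OPEC headlines and summarise the impact.",
        default_service=True,  # all built-in services
    )
    print(response["final_response"])
```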

---

### 12 — Use one `include_service` surface for custom services

`include_service` is the only custom-service surface you need.

- If you pass a local `schema.py` path or a directory containing `schema.py`, the SDK loads it locally, exposes its schema to the model, executes the emitted service call in your process, and resumes the run through the callback protocol.
- Buffered requests still auto-execute local callback services in the caller process.
- Streaming requests do not auto-execute local callback services. They pause with an `awaiting_client_services` event so your code can decide how to execute the local work and when to resume.
- If you pass a shorthand string such as a service-manager name, service name, or local service directory name, the SDK scans local `schema.py` files and prefers matches under `services/` directories.
- If you pass a service manager class or instance that comes from a schema module, the SDK resolves that module automatically.
- If you pass an unresolved path string, the SDK passes it through to the API server as a server-hosted `schema.py` path.
- If a local service name collides with an existing server-side service name, the request fails early with a clear error instead of running the wrong service.

This keeps the public surface small while still supporting both local and server-managed services.

Local callback example:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Use my local weather service and compare it with today's oil move.",
        default_service=["search_web"],
        include_service=["./services/weather/schema.py"],
    )
    print(response["final_response"])
```

Shorthand discovery example:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Use my local weather service.",
        include_service=["WeatherServiceManager"],
    )
    print(response["final_response"])
```

Direct manager reference example:

```python
from nineth import NinethClient
from myapp.schema import WeatherServiceManager

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Use my local weather service.",
        include_service=[WeatherServiceManager],
    )
    print(response["final_response"])
```

Manual streaming callback example:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    pending_process_id = None
    pending_calls = []

    for event in client.model.request(
        "What is the weather like right now in sf?",
        include_service=["./regulator/schema.py"],
        stream=True,
    ):
        if event["type"] == "model_delta":
            print(event["data"].get("text", ""), end="", flush=True)
        elif event["type"] == "awaiting_client_services":
            pending_process_id = event["process_id"]
            pending_calls = event["data"]["pending_client_calls"]

    if pending_process_id:
        resume = client.model.request(
            "Resume after local weather call.",
            stream=True,
            session_id=pending_process_id,
            client_service_results=[
                {
                    "call_id": pending_calls[0]["call_id"],
                    "service_name": "get_weather",
                    "success": True,
                    "result": {"location": "San Francisco", "forecast": "sunny"},
                }
            ],
        )
        for event in resume:
            if event["type"] == "model_delta":
                print(event["data"].get("text", ""), end="", flush=True)
```

Server-hosted path example:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Use the custom weather service and compare it with market risk sentiment.",
        default_service=["search_web"],
        include_service=["/srv/app/services/weather/schema.py"],
    )
    print(response["final_response"])
```

Minimal local `schema.py` shape:

```python
from typing import Any, Dict


class WeatherServiceManager:
    def get_weather(self, location: str) -> Dict[str, Any]:
        return {"location": location, "forecast": "sunny"}


weather_services = {
    "get_weather": {
        "name": "get_weather",
        "description": "Return a tiny weather forecast.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"],
            "additionalProperties": False,
        },
    }
}


weather_implementations = {
    "get_weather": lambda manager: manager.get_weather,
}
```

---

### 13 — Enable debug telemetry

Set `debug=True` to keep the worker's structured telemetry in the buffered response or streamed events.

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request(
        "Research market risk and show the internal trace.",
        default_service=["search_news"],
        debug=True,
    )
    print(response.get("events", []))
```

---

## Response shape

### Buffered (`stream=False`)

```python
{
    "final_response": "Bitcoin is trading near...",
    "iterations": 2,
    "usage": {"prompt_tokens": 412, "completion_tokens": 88, "total_tokens": 500},
    "thinking": [],          # only populated when show_reasoning=True
    "service_calls": [...],
    "service_responses": [...],
    "events": [...],
}
```

Only `final_response` and `iterations` are guaranteed to be present on every response.
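
Because the other keys are optional, read them defensively; a small sketch:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    response = client.model.request("Analyse ETH.")

    # Guaranteed keys.
    text = response["final_response"]
    turns = response["iterations"]

    # Optional keys: fall back to empty defaults when absent.
    usage = response.get("usage", {})
    thinking = response.get("thinking", [])
    print(text, turns, usage.get("total_tokens"))
```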

### Streaming (`stream=True`)

Each loop iteration yields a dict:

```python
# Text arriving live
{"type": "model_delta", "data": {"text": "Bitcoin is trading..."}}

# Service calls the model made
{"type": "service_call",     "data": {"service_name": "search_web", "params": {...}}}
{"type": "service_response", "data": {"service_name": "search_web", "success": True, "summary": {...}}}

# Final summary — always the last event
{"type": "result", "data": {"final_response": "...", "iterations": 2}}
```
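
A sketch of a dispatch loop that handles each event type shown above:

```python
from nineth import NinethClient

with NinethClient(default_model="1984-m3-0317") as client:
    for event in client.model.request(
        "Research BTC ETF flows.",
        default_service=["search_web"],
        stream=True,
    ):
        kind = event["type"]
        if kind == "model_delta":
            print(event["data"]["text"], end="", flush=True)
        elif kind == "service_call":
            print(f"\n[call] {event['data']['service_name']}")
        elif kind == "service_response":
            print(f"[done] {event['data']['service_name']} ok={event['data']['success']}")
        elif kind == "result":
            print("\n[result] iterations:", event["data"]["iterations"])
```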

---

## Error handling

```python
from nineth import NinethClient, NinethAPIError

with NinethClient(default_model="1984-m3-0317") as client:
    try:
        response = client.model.request("Analyse ETH.")
    except NinethAPIError as exc:
        print("API error:", exc)
    except ValueError as exc:
        print("Configuration error:", exc)
```

---

## Authentication

Set `NINETH_API_KEY` in your environment or pass `api_key=` to the client constructor.
The key can be any key registered in the server's `NINETH_API_KEYS` registry.
The health check endpoint does not require a key.
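
A small sketch of passing the key explicitly instead of relying on the environment variable:

```python
import os

from nineth import NinethClient

# Equivalent to letting the client read NINETH_API_KEY itself.
with NinethClient(
    api_key=os.environ["NINETH_API_KEY"],
    default_model="1984-m3-0317",
) as client:
    print(client.health())  # the health check itself needs no key
```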
