Metadata-Version: 2.4
Name: zai-adk-python-preview
Version: 0.1.1
Summary: A Python SDK for building AI agents with LLM integration
Project-URL: Homepage, https://github.com/zai-adk-python/zai-adk-python
Project-URL: Documentation, https://github.com/zai-adk-python/zai-adk-python#readme
Project-URL: Repository, https://github.com/zai-adk-python/zai-adk-python
Project-URL: Issues, https://github.com/zai-adk-python/zai-adk-python/issues
Author: zai-adk-python contributors
License: MIT
License-File: LICENSE
Keywords: agent,ai,anthropic,llm,mcp,model-context-protocol,openai,sandbox,skills,tool-calling
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: anthropic>=0.40.0
Requires-Dist: anyio>=4.12.1
Requires-Dist: httpx[socks]>=0.28.1
Requires-Dist: mcp>=1.26.0
Requires-Dist: openai>=1.50.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-dotenv>=0.9.9
Provides-Extra: all
Requires-Dist: anthropic>=0.40.0; extra == 'all'
Requires-Dist: fastapi>=0.115.0; extra == 'all'
Requires-Dist: lmnr>=0.4.0; extra == 'all'
Requires-Dist: openai>=1.50.0; extra == 'all'
Requires-Dist: uvicorn[standard]>=0.32.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.40.0; extra == 'anthropic'
Provides-Extra: observability
Requires-Dist: lmnr>=0.4.0; extra == 'observability'
Provides-Extra: openai
Requires-Dist: openai>=1.50.0; extra == 'openai'
Provides-Extra: server
Requires-Dist: fastapi>=0.115.0; extra == 'server'
Requires-Dist: uvicorn[standard]>=0.32.0; extra == 'server'
Provides-Extra: test
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'test'
Requires-Dist: pytest>=8.0.0; extra == 'test'
Description-Content-Type: text/markdown

# zai-adk-python

English | [简体中文](README_zh.md)

A Python SDK for building AI agents with LLM integration. It provides a unified interface for multiple LLM providers (OpenAI and Anthropic) with native tool calling, sandboxed execution, a skills system, and observability.

## Features

- **Unified LLM Interface** — Single API for OpenAI and Anthropic with provider-specific optimizations
- **Tool System** — FastAPI-style dependency injection with automatic JSON schema generation
- **Sandboxed Execution** — Secure filesystem and command isolation for agent tools
- **Skills System** — Reusable prompt instructions with hot-reload support
- **MCP Integration** — First-class support for Model Context Protocol servers
- **Event Streaming** — Real-time agent events (thinking, tool calls, responses)
- **Token Tracking** — Built-in cost calculation and usage monitoring
- **Observability** — Laminar integration for tracing and debugging

## Requirements

- Python >= 3.11
- [uv](https://github.com/astral-sh/uv) for dependency management

## Installation

```bash
# Clone the repository
git clone <repo-url>
cd zai-adk-python

# Install dependencies
uv sync --group dev
```

## Configuration

Create a `.env` file in the project root:

```bash
# Anthropic
ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
ANTHROPIC_API_KEY=your-api-key
ANTHROPIC_MODEL=glm-4.7

# OpenAI (optional)
OPENAI_BASE_URL=https://open.bigmodel.cn/api/paas/v4/
OPENAI_API_KEY=your-api-key
OPENAI_MODEL=glm-4.7
```
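Since `python-dotenv` is a core dependency, these variables are presumably loaded from `.env` at startup. A quick stdlib-only sanity check you can run before starting an agent (illustrative, not part of the SDK):

```python
import os

REQUIRED = ("ANTHROPIC_API_KEY", "ANTHROPIC_BASE_URL", "ANTHROPIC_MODEL")

def missing_env(required: tuple[str, ...] = REQUIRED) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if missing_env():
    print("Missing configuration:", ", ".join(missing_env()))
```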

## Quick Start

### Basic Agent

```python
import asyncio
from zai_adk import Agent
from zai_adk.llm.anthropic.chat import ChatAnthropic
from zai_adk.tools import tool

@tool("Calculate the sum of two numbers")
async def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

async def main():
    agent = Agent(
        llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
        tools=[add],
    )

    response = await agent.query("What is 25 + 17?")
    print(response)

asyncio.run(main())
```

### Streaming Events

```python
from zai_adk.agent import ToolCallEvent, ToolResultEvent, FinalResponseEvent

async for event in agent.query_stream("Help me with a task"):
    match event:
        case ToolCallEvent(tool=name, args=args):
            print(f"[Calling {name} with {args}]")
        case ToolResultEvent(tool=name, result=result):
            print(f"[Result from {name}]: {result}")
        case FinalResponseEvent(content=text):
            print(f"[Final]: {text}")
```

### Sandboxed File Operations

```python
from zai_adk.sandbox import SandboxConfig
from zai_adk.tools.builtin.sandbox import bash, fs_read, fs_write

agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[bash, fs_read, fs_write],
    sandbox=SandboxConfig(
        kind="local",
        work_dir="./workspace",
        enforce_boundary=True,
    ),
)
```
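With `enforce_boundary=True`, file tools are expected to reject paths that resolve outside `work_dir`. The check is conceptually equivalent to this stdlib sketch (illustrative, not the SDK's implementation):

```python
from pathlib import Path

def inside_boundary(work_dir: str, candidate: str) -> bool:
    """True if candidate resolves to a path under work_dir (symlinks followed)."""
    root = Path(work_dir).resolve()
    target = (root / candidate).resolve()
    return target == root or root in target.parents

# A relative path stays inside; traversal escapes are rejected.
print(inside_boundary("/tmp/ws", "notes.txt"))      # True
print(inside_boundary("/tmp/ws", "../etc/passwd"))  # False
```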

### Skills System

```python
from zai_adk.skills import Skill, SkillsManager
from zai_adk.tools.builtin import skills

skills_manager = SkillsManager()
skills_manager.register(
    Skill(
        name="code-review",
        description="Review code for bugs and security issues",
        instructions="""You are a code reviewer...""",
    )
)

agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[skills],
    skills_manager=skills_manager,
)
```

Or use filesystem-based skills with auto-discovery:

```python
agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[skills],
    skills_dir="./skills",  # Scans for SKILL.md files
)
```
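The `SKILL.md` format is not specified here; by analogy with the programmatic `Skill(name=..., description=..., instructions=...)` fields, a plausible layout might look like this (the frontmatter keys are an assumption):

```markdown
---
name: code-review
description: Review code for bugs and security issues
---

You are a code reviewer. Check each change for correctness,
security issues, and missing tests before approving.
```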

## Tool Creation

Use the `@tool` decorator with type hints for automatic schema generation:

```python
from typing import Annotated
from zai_adk.tools import tool, Depends

# Simple tool
@tool("Get current weather")
async def get_weather(location: str, unit: str = "celsius") -> str:
    """Get the weather for a location."""
    return f"Weather in {location}: 22° {unit}"

# With dependency injection (DbConnection and get_db are your own, user-defined helpers)
@tool("Search the database")
async def search_db(
    query: str,
    db_conn: Annotated[DbConnection, Depends(get_db)],
) -> list[dict]:
    """Search the database for matching records."""
    return await db_conn.execute(query)
```
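For the `get_weather` tool above, schema generation conceptually maps each annotated parameter to a JSON-schema property, with parameters lacking defaults marked required. A stdlib-only sketch of the idea (illustrative, not the SDK's generator):

```python
import inspect

# Minimal mapping from Python annotations to JSON-schema types.
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def schema_for(func) -> dict:
    """Build a JSON-schema-style parameter description from a signature."""
    sig = inspect.signature(func)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

async def get_weather(location: str, unit: str = "celsius") -> str:
    ...

print(schema_for(get_weather))
# {'type': 'object', 'properties': {'location': {'type': 'string'},
#  'unit': {'type': 'string'}}, 'required': ['location']}
```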

### Dependency Injection

```python
from zai_adk.sandbox import get_sandbox
from zai_adk.sandbox.base import Sandbox

@tool("Execute a shell command")
async def bash(
    command: str,
    sandbox: Annotated[Sandbox, Depends(get_sandbox)],
) -> str:
    """Run a command in the sandboxed environment."""
    result = await sandbox.exec(command)
    return result.stdout
```

### Ephemeral Tools

Keep only the last N tool outputs in message history:

```python
from pathlib import Path

@tool("Read a file", ephemeral=1)
async def read_file(path: str) -> str:
    """Only the most recent result is kept in context."""
    return Path(path).read_text()
```
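Conceptually, `ephemeral=N` prunes older results of the same tool from message history, keeping only the newest N (illustrative sketch, not the SDK's internals):

```python
def prune_ephemeral(history: list[dict], tool: str, keep: int) -> list[dict]:
    """Drop all but the last `keep` results produced by `tool`."""
    indices = [i for i, msg in enumerate(history)
               if msg.get("role") == "tool" and msg.get("tool") == tool]
    drop = set(indices[:-keep]) if keep else set(indices)
    return [msg for i, msg in enumerate(history) if i not in drop]

history = [
    {"role": "tool", "tool": "read_file", "content": "old"},
    {"role": "assistant", "content": "working on it"},
    {"role": "tool", "tool": "read_file", "content": "new"},
]
# Keeps the assistant turn and only the newest read_file result.
print(prune_ephemeral(history, "read_file", keep=1))
```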

## Agent Modes

### CLI Mode (Default)

The agent stops automatically when the LLM returns text without tool calls:

```python
agent = Agent(llm=..., tools=[...], mode="cli")
response = await agent.query("List all Python files")
```

### Autonomous Mode

The agent requires an explicit `done` tool call to signal completion:

```python
@tool("Signal task completion")
async def done(message: str) -> str:
    return f"TASK COMPLETE: {message}"

agent = Agent(llm=..., tools=[..., done], mode="autonomous")
```

## MCP Integration

```python
from zai_adk.mcp import MCPServerConfig

agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    mcp_servers=[
        MCPServerConfig(
            name="filesystem",
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "./workspace"],
        ),
    ],
)
```

## Protocol Support

The SDK supports multiple output protocols through a composable interface:

### AG-UI Protocol

```python
from zai_adk import Agent
from zai_adk.protocols.agui import AGUIProtocol

agent = Agent(llm=llm, tools=tools)
agui = AGUIProtocol(agent)

async for event in agui.query_stream("Hello"):
    # Handle AG-UI events
    ...
```

### SSE Streaming for HTTP

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from zai_adk import Agent
from zai_adk.protocols.agui import AGUIProtocol

app = FastAPI()
agent = Agent(llm=llm, tools=tools)
agui = AGUIProtocol(agent)

@app.post("/chat")
async def chat(message: str):
    async def stream():
        async for sse in agui.query_stream_sse(message):
            yield sse

    return StreamingResponse(stream(), media_type="text/event-stream")
```

See [docs/protocols.md](docs/protocols.md) for more details on implementing custom protocols.

## Testing

```bash
# Run all tests
uv run pytest tests/ -v

# Skip tests requiring real LLM API calls
uv run pytest tests/ -v -m "not llm"

# Run specific test
uv run pytest tests/llm/test_anthropic_chat.py::test_anthropic_basic_chat -v
```
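The `-m "not llm"` filter implies an `llm` marker. Registering it, and enabling pytest-asyncio's auto mode, would typically live in `pyproject.toml` (the keys below are assumptions based on the dependencies listed above, not this project's actual config):

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
markers = [
    "llm: tests that call a real LLM API",
]
```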

## Examples

| Example | Description |
|---------|-------------|
| `claude_code.py` | Claude Code-style tools with sandboxed filesystem |
| `skills_usage.py` | Skills system patterns and hot-reload |
| `mcp_usage.py` | Model Context Protocol integration |
| `todo_usage.py` | Todo list management example |

Run an example:
```bash
python -m examples.claude_code
```

## Architecture

```
zai_adk/
├── agent/          # Agent loop with event streaming
├── llm/            # LLM provider abstraction (OpenAI, Anthropic)
├── tools/          # Tool system with dependency injection
├── sandbox/        # Secure execution environments
├── skills/         # Reusable prompt instructions
├── mcp/            # Model Context Protocol integration
├── protocols/      # Protocol converter base interface
│   └── agui/       # AG-UI protocol implementation
├── tokens/         # Usage tracking and cost calculation
└── observability/  # Laminar tracing integration
```
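The `tokens/` module handles usage tracking and cost calculation. Conceptually, cost accounting multiplies token counts by per-million-token prices, as in this sketch (illustrative, with placeholder prices; not the SDK's API):

```python
from dataclasses import dataclass

@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

def cost_usd(usage: Usage, input_per_mtok: float, output_per_mtok: float) -> float:
    """Dollar cost given per-million-token prices (prices here are placeholders)."""
    return (usage.input_tokens * input_per_mtok
            + usage.output_tokens * output_per_mtok) / 1_000_000

u = Usage(input_tokens=12_000, output_tokens=3_000)
print(round(cost_usd(u, input_per_mtok=3.0, output_per_mtok=15.0), 6))  # 0.081
```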

## License

MIT License
