Metadata-Version: 2.4
Name: ai-agent-proxy
Version: 2.1.0
Summary: Mailbox-backed AI agent proxy server
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: fastapi<1,>=0.115
Requires-Dist: pydantic<3,>=2.7
Requires-Dist: uvicorn<1,>=0.30

# ai-agent-proxy

`ai-agent-proxy` is a small HTTP service that drops incoming requests into a local inbox for your agent to process.

It is designed for a local workspace flow:

- the API receives a request
- the request JSON is written into the agent inbox
- the local worker wakes up and handles it

## Features

- long-lived, responsive agent sessions for replies, such as Codex-based workflows
- faster handling of incoming messages for OpenClaw, especially while the agent is already busy with earlier work
- the agent can notice later messages before replying to the original one, when time permits
- batch processing per channel, so multiple incoming messages can be handled together and answered more naturally, the way a human would
- a context-rewrite flow for better caching and more stable long-running context
- OpenClaw CLI support for multiple replies and long-duration work
- a purely local, CLI-based runtime, so you can use whichever agent CLI you prefer

## Yes / No

Yes:

- this package is for the OpenClaw OpenAI-compatible backend API flow
- it accepts OpenAI-style chat requests and turns them into local inbox work for the agent; this helps OpenClaw handle incoming messages more responsively
- when new messages arrive while the agent is already working, the agent can learn about them before replying to the original one, if time permits
- the agent is expected to use tools and skills to send the real reply outward

No:

- this is not a general OpenAI endpoint replacement
- this endpoint does not return the final assistant result in the HTTP response
- it is not intended for clients that expect normal synchronous OpenAI chat completion behavior

## Agent CLI

`AGENT_INIT_CLI` and `AGENT_CLI` let you choose how the local agent is launched.

- `AGENT_INIT_CLI` is the command used for the worker init step. This session loads the workspace, reads the first-level Markdown files, and builds initial context.
- `AGENT_CLI` is the command used for the normal inbox work loop. This session wakes up for new inbox files, works on requests, and sends replies.
- You can keep both values the same if you want identical agent behavior for init and for normal work.
- You can set them differently if you want one command for startup and another for ongoing work.

Example:

- `AGENT_INIT_CLI="/usr/bin/codex exec --skip-git-repo-check --dangerously-bypass-approvals-and-sandbox"` starts a fresh Codex run to prepare context.
- `AGENT_CLI="/usr/bin/codex resume --last --dangerously-bypass-approvals-and-sandbox"` resumes the latest Codex session for inbox processing.
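Because both values are single shell-style strings, the worker presumably splits them into an argv list before launching the process. A minimal sketch of that step (the helper name `agent_command` is hypothetical, not part of this package's API):

```python
import os
import shlex

def agent_command(env_var: str, default: str) -> list[str]:
    """Resolve an AGENT_*_CLI value into an argv list (hypothetical helper)."""
    raw = os.environ.get(env_var, default)
    # shlex.split honors the quoting used in the config file
    return shlex.split(raw)

argv = agent_command(
    "AGENT_CLI",
    "/usr/bin/codex resume --last --dangerously-bypass-approvals-and-sandbox",
)
# the worker would then run this argv, e.g. with subprocess.Popen(argv)
```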

## Install

```bash
pip install ai-agent-proxy
```

## Config

The default config file path is `./.ai-agent-proxy.conf`.

Current sample config:

```env
HOST=0.0.0.0
PORT=7011
WORKSPACE=~/.openclaw/workspace
API_KEY=aibot_<your-key-something>
AGENT_INIT_CLI="/usr/bin/codex exec --skip-git-repo-check --dangerously-bypass-approvals-and-sandbox"
AGENT_CLI="/usr/bin/codex resume --last --dangerously-bypass-approvals-and-sandbox"
```

Notes:

- `WORKSPACE` is the main workspace path.
- `WORKDIR` is still accepted as a compatibility alias, but `WORKSPACE` is preferred.
- `API_KEY` protects the HTTP API with `Authorization: Bearer <api-key>`.
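The config file uses plain `KEY=value` lines, so its semantics can be sketched with a minimal parser. This is illustrative only; the package's real loader may differ in details such as comment handling:

```python
from pathlib import Path

def load_config(path: str = "./.ai-agent-proxy.conf") -> dict:
    """Minimal env-style parser (illustrative; the real loader may differ)."""
    cfg = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        cfg[key.strip()] = value.strip().strip('"')
    # WORKDIR is a compatibility alias; WORKSPACE wins when both are set
    cfg.setdefault("WORKSPACE", cfg.get("WORKDIR", ""))
    # expand ~ so paths like ~/.openclaw/workspace resolve
    if cfg.get("WORKSPACE"):
        cfg["WORKSPACE"] = str(Path(cfg["WORKSPACE"]).expanduser())
    return cfg
```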

## API

This is an enqueue-only, quick-release API design intended to improve OpenClaw message handling throughput and give the agent room for batch processing.

- `POST /v1/chat/completions`
- `POST /v1/responses`
- `GET /v1/models`
- `GET /v1/models/{model_id}`

For the two POST endpoints, the server writes the request body into the local inbox and immediately returns an empty reply to release the HTTP connection. If `stream: true` is used, the response is returned as SSE, but the request is still enqueue-only.
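A minimal client sketch using only the standard library. The host, port, and key are placeholders from the sample config, and the model name is illustrative; remember that the HTTP response is an immediate empty reply, not the assistant's answer:

```python
import json
import urllib.request

payload = {
    "model": "openclaw-agent",  # model name is illustrative
    "messages": [{"role": "user", "content": "summarize today's inbox"}],
}
req = urllib.request.Request(
    "http://127.0.0.1:7011/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer aibot_example",  # placeholder key
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would enqueue the request; the connection
# releases immediately and the body does NOT contain the final answer.
```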

## How It Works

The agent state lives under:

```text
<workspace>/.ai-agent-proxy
```

Important paths:

- inbox: `<workspace>/.ai-agent-proxy/inbox`
- logs: `<workspace>/.ai-agent-proxy/logs`
- lock file: `<workspace>/.ai-agent-proxy/LOCK`

Inbox files are raw JSON only. There is no extra mailbox wrapper.

When a request is enqueued, the service starts the local worker. The worker reads inbox files, follows the message instructions, and sends outward replies through the configured agent flow.
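The enqueue/consume cycle above can be sketched as two small functions. The file-naming scheme and cleanup behavior here are illustrative assumptions; the real server and worker may name files differently and will also honor the `LOCK` file:

```python
import json
import time
from pathlib import Path

def enqueue(inbox: Path, request_body: dict) -> Path:
    """Write one raw-JSON inbox file, as the server does per request."""
    inbox.mkdir(parents=True, exist_ok=True)
    # timestamp-based name (illustrative) keeps files sortable oldest-first
    path = inbox / f"{time.time_ns()}.json"
    path.write_text(json.dumps(request_body))
    return path

def drain(inbox: Path) -> list[dict]:
    """Read and remove pending inbox files, oldest first (worker side)."""
    requests = []
    for path in sorted(inbox.glob("*.json")):
        requests.append(json.loads(path.read_text()))
        path.unlink()  # a real worker would also respect the LOCK file
    return requests
```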

## Skills

Project skills live in `ai_agent_proxy/skills/`.

They describe:

- the proxy workflow
- outbound message delivery
- context handling

## Logs

Useful server log lines:

- `mailbox_request enqueue ... inbox_root=...`
- `mailbox_request put request_id=... into inbox_path=...`
- `chat_completion invalid_json ...`

Worker output is written to:

```text
<workspace>/.ai-agent-proxy/logs/YYYY-MM-DD.log
```
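Assuming the `YYYY-MM-DD` pattern above uses the local date, the current log path can be computed like this:

```python
from datetime import date
from pathlib import Path

workspace = Path("~/.openclaw/workspace").expanduser()  # sample WORKSPACE value
log_path = workspace / ".ai-agent-proxy" / "logs" / f"{date.today():%Y-%m-%d}.log"
```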
