Metadata-Version: 2.4
Name: agentbreak
Version: 0.2.0
Summary: Chaos proxy for testing LLM agents. Supports OpenAI, Anthropic, and MCP.
License-Expression: MIT
Project-URL: Homepage, https://github.com/mnvsk97/agentbreak
Project-URL: Repository, https://github.com/mnvsk97/agentbreak
Project-URL: Issues, https://github.com/mnvsk97/agentbreak/issues
Keywords: llm,openai,anthropic,mcp,proxy,chaos-testing,resilience,agents
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Internet :: Proxy Servers
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: fastapi>=0.115.0
Requires-Dist: httpx>=0.27.0
Requires-Dist: pydantic>=2.7.0
Requires-Dist: pyyaml>=6.0.0
Requires-Dist: typer>=0.12.0
Requires-Dist: uvicorn>=0.30.0
Provides-Extra: dev
Requires-Dist: pytest>=8.0.0; extra == "dev"
Requires-Dist: pytest-xdist>=3.6.0; extra == "dev"
Requires-Dist: httpx>=0.27.0; extra == "dev"
Dynamic: license-file

# AgentBreak

Chaos proxy for testing how your agents handle failures. It sits between your agent and the LLM or MCP server and injects faults.

```
Agent  -->  AgentBreak (localhost:5005)  -->  Real LLM / MCP server
                     ^
          .agentbreak/scenarios.yaml defines faults
```

## Quick start

```bash
pip install agentbreak
agentbreak init       # creates .agentbreak/ with default configs
agentbreak serve      # start the chaos proxy
```

Point your agent at `http://localhost:5005` instead of the real API:

- OpenAI SDK: set `OPENAI_BASE_URL=http://localhost:5005/v1`
- Anthropic SDK: set `ANTHROPIC_BASE_URL=http://localhost:5005`
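
Or configure the client directly in code. A minimal sketch assuming the OpenAI Python SDK v1+; the model name is just a placeholder, and in mock mode the API key can be any string:

```python
from openai import OpenAI

# Point the SDK at the AgentBreak proxy instead of api.openai.com.
# In mock mode no real key is needed; any non-empty string satisfies the SDK.
client = OpenAI(base_url="http://localhost:5005/v1", api_key="test")

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your upstream serves
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```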

Check results:

```bash
curl localhost:5005/_agentbreak/scorecard
```
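
Or pull it from a script. A small sketch using `httpx` (this assumes the scorecard endpoint returns JSON):

```python
import httpx

# Fetch the scorecard from the running proxy and print it.
resp = httpx.get("http://localhost:5005/_agentbreak/scorecard", timeout=5.0)
resp.raise_for_status()
print(resp.json())
```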

## Config

**`.agentbreak/application.yaml`** -- what to proxy:

```yaml
llm:
  enabled: true
  mode: mock           # mock (no API key needed) or proxy (forwards to upstream)
mcp:
  enabled: false       # set true + upstream_url for MCP testing
serve:
  port: 5005
```

**`.agentbreak/scenarios.yaml`** -- what faults to inject:

```yaml
version: 1
scenarios:
  - name: slow-llm
    summary: Latency spike on completions
    target: llm_chat
    fault:
      kind: latency
      min_ms: 2000
      max_ms: 5000
    schedule:
      mode: random
      probability: 0.3
```

Or use a preset: `brownout`, `mcp-slow-tools`, `mcp-tool-failures`, `mcp-mixed-transient`.

## Fault kinds

`http_error`, `latency`, `timeout` (MCP only), `empty_response`, `invalid_json`, `schema_violation`, `wrong_content`, `large_response`
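
These faults exist to expose agents that assume the happy path. As an illustration only (not part of AgentBreak), a call site hardened against a few of them might look like this, assuming the agent expects a JSON reply and using the OpenAI SDK's built-in `timeout` and `max_retries` options:

```python
import json
from openai import OpenAI

# timeout + max_retries cover latency and transient http_error faults.
client = OpenAI(
    base_url="http://localhost:5005/v1",
    api_key="test",        # placeholder; fine in mock mode
    timeout=10.0,
    max_retries=2,
)

def ask(prompt: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    content = resp.choices[0].message.content
    # Guard against empty_response and invalid_json faults.
    if not content:
        raise ValueError("empty completion from upstream")
    try:
        return json.loads(content)
    except json.JSONDecodeError as exc:
        raise ValueError("completion was not valid JSON") from exc
```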

## MCP testing

```bash
agentbreak inspect    # discover tools from upstream MCP server
agentbreak serve      # proxy both LLM and MCP traffic
```

## CLI

```bash
agentbreak init       # create .agentbreak/ config
agentbreak serve      # start proxy
agentbreak validate   # check config
agentbreak inspect    # discover MCP tools
agentbreak verify     # run tests
```

## Claude Code skill

```bash
npx skills add mnvsk97/agentbreak
```

Then use `/agentbreak` to chaos-test your agent with a guided workflow.

## Examples

See [examples/](examples/) for sample agents and MCP servers with various auth configs.
