Metadata-Version: 2.4
Name: axel-protocol
Version: 2.3.7
Summary: AXEL — Agent eXchange Language: a universal protocol for multi-LLM networks
Project-URL: Homepage, https://github.com/sectorx/axel-protocol
Project-URL: Repository, https://github.com/sectorx/axel-protocol
Project-URL: Issues, https://github.com/sectorx/axel-protocol/issues
Project-URL: Changelog, https://github.com/sectorx/axel-protocol/blob/main/CHANGELOG.md
Author-email: Sector X <sector11x@gmail.com>
License: MIT License
        
        Copyright (c) 2024 Sector X
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
License-File: LICENSE
Keywords: agents,ai,anthropic,communication,llm,multi-agent,ollama,openai,protocol
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: fastapi>=0.110.0
Requires-Dist: uvicorn[standard]>=0.29.0
Provides-Extra: all
Requires-Dist: anthropic>=0.25.0; extra == 'all'
Requires-Dist: cohere>=5.0.0; extra == 'all'
Requires-Dist: google-generativeai>=0.7.0; extra == 'all'
Requires-Dist: groq>=0.9.0; extra == 'all'
Requires-Dist: litellm>=1.40.0; extra == 'all'
Requires-Dist: mistralai>=1.0.0; extra == 'all'
Requires-Dist: openai>=1.20.0; extra == 'all'
Requires-Dist: together>=1.2.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.25.0; extra == 'anthropic'
Provides-Extra: bedrock
Requires-Dist: boto3>=1.34.0; extra == 'bedrock'
Provides-Extra: cohere
Requires-Dist: cohere>=5.0.0; extra == 'cohere'
Provides-Extra: dev
Requires-Dist: anthropic>=0.25.0; extra == 'dev'
Requires-Dist: cohere>=5.0.0; extra == 'dev'
Requires-Dist: google-generativeai>=0.7.0; extra == 'dev'
Requires-Dist: groq>=0.9.0; extra == 'dev'
Requires-Dist: httpx>=0.27; extra == 'dev'
Requires-Dist: litellm>=1.40.0; extra == 'dev'
Requires-Dist: mistralai>=1.0.0; extra == 'dev'
Requires-Dist: mypy>=1.9; extra == 'dev'
Requires-Dist: openai>=1.20.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Requires-Dist: together>=1.2.0; extra == 'dev'
Provides-Extra: gemini
Requires-Dist: google-generativeai>=0.7.0; extra == 'gemini'
Provides-Extra: groq
Requires-Dist: groq>=0.9.0; extra == 'groq'
Provides-Extra: litellm
Requires-Dist: litellm>=1.40.0; extra == 'litellm'
Provides-Extra: mistral
Requires-Dist: mistralai>=1.0.0; extra == 'mistral'
Provides-Extra: ollama
Provides-Extra: openai
Requires-Dist: openai>=1.20.0; extra == 'openai'
Provides-Extra: together
Requires-Dist: together>=1.2.0; extra == 'together'
Description-Content-Type: text/markdown

# AXEL — Agent eXchange Language

> **A universal protocol for multi-LLM networks.**
> Connect Claude, GPT, Llama, Gemini — any model — into a single communicating, learning network.

[![PyPI version](https://badge.fury.io/py/axel-protocol.svg)](https://pypi.org/project/axel-protocol/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![CI](https://github.com/Sector11x/axel-protocol/actions/workflows/ci.yml/badge.svg)](https://github.com/Sector11x/axel-protocol/actions)

---

## What is AXEL?

AXEL is a **structured message protocol** and **HTTP network layer** that lets AI agents from different providers communicate, collaborate, and share knowledge — without being locked into a single vendor.

```
┌─────────────┐    TK/OK/LS    ┌──────────────┐    TK/OK/LS    ┌─────────────┐
│  Claude 3.5 │ ◄────────────► │  AXEL Server │ ◄────────────► │  GPT-4o     │
│  researcher │                │  (the hub)   │                │  writer     │
└─────────────┘                └──────┬───────┘                └─────────────┘
                                      │
                               ┌──────▼───────┐
                               │  Llama 3     │
                               │  reviewer    │
                               └──────────────┘
```

Each agent:
- **announces** its capabilities to the network
- **receives tasks** via structured `TK` messages
- **returns results** as `OK` messages
- **shares lessons** with all peers via `LS` (learn-share) broadcast
- **discovers** the best agent for any capability via `/discover/<action>`
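
The bullets above map onto a simple JSON wire exchange. A minimal sketch of a task/result pair (the field names here are illustrative assumptions; the canonical schemas live in `docs/message-schemas.md`):

```python
import json

# Hypothetical TK (task) message and its OK (result) reply.
# Field names are assumptions for illustration, not the canonical schema.
tk = {
    "type": "TK",
    "from": "researcher",
    "to": "writer",
    "act": "draft_content",
    "args": {"topic": "distributed AI inference"},
}
ok = {
    "type": "OK",
    "from": "writer",
    "to": "researcher",
    "result": {"draft": "..."},
}

# Whatever the exact schema, every AXEL message must round-trip as JSON
assert json.loads(json.dumps(tk)) == tk
print(tk["type"], "->", ok["type"])
```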

### The FLM idea

A network of coordinated agents, each specialised, with tasks routed to the right model, starts to look less like "several chatbots" and more like a single distributed intelligence. We call this a **Fractionalized Language Model (FLM)**: the "model" is the inference process distributed across the network, not any individual weight file.

AXEL is the message bus that makes FLMs possible today, with any combination of hosted or local models.

---

## Quick start

### 0. Get API access (one key for everything — recommended)

The fastest way to access every major LLM (GPT-4o, Claude, Gemini, Llama, Mistral, and 100+ more) is **[OpenRouter](https://openrouter.ai)** — one free sign-up, one API key, no per-provider billing setup.

```bash
# Sign up at https://openrouter.ai, then:
export OPENROUTER_API_KEY=sk-or-...
```

AXEL also supports direct provider keys if you already have them. Run the setup wizard after install:

```bash
axel setup    # interactive key configuration
```

Alternatively, set individual provider env vars: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`, etc.

---

### 1. Install

**One-click installer** (installs everything, fixes PATH, prompts for API key):

```bash
curl -sSL https://raw.githubusercontent.com/Sector11x/axel-protocol/main/install.sh | bash
```

**Or with pip:**

```bash
pip install axel-protocol
```

Or from source:

```bash
git clone https://github.com/Sector11x/axel-protocol
cd axel-protocol
pip install -e ".[dev]"
```

### 2. Start the network server

```bash
axel-server          # starts on http://localhost:7331
axel-server --port 8080 --host 0.0.0.0
```

Or with Docker:

```bash
docker compose up
```

### 3. Connect agents

**Option A — via OpenRouter (one key, any model):**

```python
from axel import AXELBridge

bridge = AXELBridge()

# Free models — no billing required
bridge.add_openrouter("researcher", ["research", "summarize"],
                      model="llama-free")   # shortcut for llama-3.1-8b:free

bridge.add_openrouter("writer", ["draft_content", "write"],
                      model="gemma-free")   # shortcut for gemma-2-9b:free

# Premium models via the same key
bridge.add_openrouter("analyst", ["analyze", "report"],
                      model="claude-sonnet")  # anthropic/claude-3-5-sonnet
```

**Option B — direct provider keys:**

```python
from axel import AXELClient

# Connect a researcher agent (Claude via Anthropic)
researcher = AXELClient(
    "http://localhost:7331",
    agent_id="researcher",
    model="claude-3-5-haiku-20241022",
    provider="anthropic",
    caps=["research", "summarize"],
)

# Connect a writer agent (GPT-4o-mini via OpenAI)
writer = AXELClient(
    "http://localhost:7331",
    agent_id="writer",
    model="gpt-4o-mini",
    provider="openai",
    caps=["draft_content", "write", "edit"],
)

# Send a task from researcher to writer
result = researcher.execute("writer", "draft_content", {
    "topic": "distributed AI inference",
    "format": "blog post",
    "length": "medium",
})
print(result)
```

### 4. Run a pipeline

```python
# researcher → writer → reviewer in one call
results = researcher.chain([
    {"to": "researcher", "act": "research",      "args": {"topic": "AXEL FLM architecture"}},
    {"to": "writer",     "act": "draft_content", "args": {"format": "technical brief"}},
    {"to": "reviewer",   "act": "review",        "args": {"criteria": "clarity, accuracy"}},
])
```

### 5. Open the live dashboard

While the server is running, open your browser to:

```
http://localhost:7331/ui
```

The dashboard shows real-time agent activity, memory, message flows, and network health.

---

## Message types

AXEL defines **23 message types** across two groups:

| Type | Name | Direction | Purpose |
|------|------|-----------|---------|
| `TK` | Task | any → any | Request work from an agent |
| `OK` | Result | agent → caller | Successful task response |
| `ER` | Error | agent → caller | Failed task response |
| `LS` | Lesson | any → `*` | Broadcast a learned insight |
| `MR` | Memory read | agent → server | Query shared memory |
| `MW` | Memory write | agent → server | Store a lesson |
| `QR` | Query response | server → agent | Return memory results |
| `PP` | Ping | any → any | Health check |
| `PA` | Pong | any → any | Health response |
| `HK` | Handoff | agent → agent | Transfer conversation context |
| `NT` | Note | any → any | Non-blocking annotation |
| `FT` | Feedback | any → any | Score an agent's response |
| `ST` | Status | agent → server | Capability update |
| `AB` | Abort | any → any | Cancel a task |
| `RT` | Retry | any → any | Retry a failed task |
| `BK` | Bookmark | any → server | Save context checkpoint |
| `RS` | Restore | any → server | Load context checkpoint |
| `AN` | Announce | agent → server | Register agent + capabilities |
| `CP` | Capability | server → agent | Discovery response |
| `SP` | Subscribe | agent → server | Subscribe to a channel |
| `SB` | Subscriber list | server → agent | Confirm subscription |
| `PB` | Publish | agent → server | Publish to a channel |
| `CH` | Chain | agent → server | Kick off a pipeline |

Full schema for every message type: [`docs/message-schemas.md`](docs/message-schemas.md)
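
When validating incoming traffic, the 23 codes from the table can be kept in a simple set for a cheap first-pass check (this set mirrors the table above; the schema doc remains the canonical list):

```python
# All 23 two-letter message type codes from the table above
MESSAGE_TYPES = {
    "TK", "OK", "ER", "LS", "MR", "MW", "QR", "PP", "PA", "HK", "NT", "FT",
    "ST", "AB", "RT", "BK", "RS", "AN", "CP", "SP", "SB", "PB", "CH",
}

def is_valid_type(code: str) -> bool:
    """Cheap membership check before full schema validation."""
    return code in MESSAGE_TYPES

assert len(MESSAGE_TYPES) == 23
print(is_valid_type("TK"), is_valid_type("XX"))
```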

---

## Connecting real LLMs

By default the server uses `MockAdapter` (instant fake responses — good for testing).
Switch to real models by calling the adapter endpoints or using the Python API:

### Claude (Anthropic)

```python
import os

# "server" here is your running AXEL network server instance
server.bridge.add_anthropic(
    "researcher",
    model="claude-3-5-haiku-20241022",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    caps=["research", "summarize"],
)
```

### GPT-4 (OpenAI)

```python
import os

server.bridge.add_openai(
    "writer",
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    caps=["draft_content", "write"],
)
```

### Llama (Ollama — free, local, private)

```bash
ollama pull llama3   # one-time download
ollama serve         # keep running
```

```python
server.bridge.add_ollama(
    "reviewer",
    model="llama3",
    endpoint="http://localhost:11434",
    caps=["review", "classify"],
)
```

Any combination works. One team could use Claude for research, GPT for writing, and a local Llama for review — all on the same network, sharing the same memory.

---

## CLI reference

After install, the `axel` command provides everything you need from the terminal:

| Command | Description |
|---------|-------------|
| `axel setup` | Interactive wizard — configure API keys, saved to `~/.axel/config.json` |
| `axel demo` | Run a live two-agent demo (researcher + writer) using OpenRouter |
| `axel models` | Browse available OpenRouter models |
| `axel models --free` | Show only free-tier models |
| `axel status` | Server health, uptime, and connected agents |
| `axel agents` | List all registered agents |
| `axel send` | Send a one-off AXEL message from the terminal |
| `axel memory` | View shared memory / lessons |
| `axel discover` | Find the best registered agent for a capability |
| `axel --help` | Full command reference |

The server itself is started with:

```bash
axel-server                          # default: http://localhost:7331
axel-server --port 8080 --host 0.0.0.0
```

---

## Architecture

```
axel/
├── core.py        # AXELMessage, AXELBuilder — pure message construction, no I/O
├── server.py      # FastAPI HTTP server — the network hub
├── client.py      # Pure-stdlib HTTP client — connect any Python script
└── learning.py    # SmartMemory — BM25 search, confidence decay, leaderboard

examples/
├── demo_live.py   # Full 8-step live demo (works with mock adapters, no keys needed)
└── monitor.html   # Browser dashboard — SSE live feed + status polling

tests/
├── test_core.py   # Message building and parsing
├── test_server.py # Server endpoints (httpx + pytest-asyncio)
└── test_client.py # Client SDK integration tests
```

### Server endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/agents/announce` | POST | Register an agent |
| `/agents` | GET | List all agents |
| `/discover/{action}` | GET | Find best agent for a capability |
| `/execute` | POST | Run a TK or CH message (returns result) |
| `/send` | POST | Fire-and-forget message delivery |
| `/inbox/{id}` | GET | Long-poll for one message |
| `/inbox/{id}/all` | GET | Drain all queued messages |
| `/memory` | GET | List shared lessons |
| `/memory/search` | GET | BM25 search across lessons |
| `/memory/write` | POST | Store a lesson |
| `/channels/subscribe` | POST | Subscribe to a pub/sub channel |
| `/channels/publish` | POST | Publish to a channel |
| `/status` | GET | Full network health + stats |
| `/stream` | GET | Server-Sent Events live feed |
| `/docs` | GET | Interactive Swagger UI |
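
Any HTTP client can speak to these endpoints; no SDK is required. A stdlib-only sketch that builds (but does not send) two requests; the `/memory/write` body fields are an assumption for illustration:

```python
import json
from urllib.request import Request

BASE = "http://localhost:7331"

def discover_request(action: str) -> Request:
    # GET /discover/{action}: find the best agent for a capability
    return Request(f"{BASE}/discover/{action}", method="GET")

def memory_write_request(source: str, insight: str) -> Request:
    # POST /memory/write: store a lesson (these body field names are
    # assumptions, not the documented schema)
    body = json.dumps({"source": source, "insight": insight}).encode()
    return Request(
        f"{BASE}/memory/write",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = discover_request("review")
print(req.get_method(), req.full_url)
```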

---

## Running the demo

No API keys needed — MockAdapters simulate instant responses:

```bash
python examples/demo_live.py
```

Expected output:
```
════════════════════════════════════════════════════════════════
  AXEL Live Network Demo  —  http://127.0.0.1:7331
════════════════════════════════════════════════════════════════

  Starting AXEL Network Server…
  ✓  Server online  →  http://127.0.0.1:7331
  ✓  Swagger docs   →  http://127.0.0.1:7331/docs
  ✓  Live stream    →  http://127.0.0.1:7331/stream

────────────────────────────────────────────────────────────────
  1 · Agents coming online
────────────────────────────────────────────────────────────────
  ✓  4 agents registered on network
  ...

  Live network demo complete!
```

---

## Self-contained agent

Subclass `AXELAgent` to create an agent that processes messages automatically:

```python
from axel import AXELAgent

class Researcher(AXELAgent):
    @AXELAgent.task_handler("research")
    def do_research(self, args, ctx):
        topic = args.get("topic", "unknown")
        # Call your LLM here
        return {"summary": f"Research on {topic}: ...", "sources": []}

    @AXELAgent.task_handler("summarize")
    def do_summarize(self, args, ctx):
        return {"summary": args.get("text", "")[:200]}

agent = Researcher(
    "http://localhost:7331",
    agent_id="researcher",
    model="claude-3-5-haiku-20241022",
    provider="anthropic",
    caps=["research", "summarize"],
)
agent.run()  # blocks and processes messages
```

---

## Shared memory & learning

Every agent can read and write to a **shared memory store**. Lessons propagate across the network automatically via `LS` broadcast:

```python
# Share a lesson with all agents
researcher.learn(
    key="research",
    insight="Multi-hop queries surface 3× more context than single-step queries",
    confidence=0.92,
)

# Search shared memory (BM25 ranking)
hits = researcher.memory_search("research context queries", n=5)
for h in hits:
    print(f"[{h['source']}] ({h['confidence']:.2f})  {h['insight']}")
```

Memory persists across agent restarts when file-backed storage is enabled, via the `--memory-path` flag on `axel-server` (or the `AXEL_MEMORY_PATH` environment variable).
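
The learning layer also ages lessons over time (`learning.py` lists confidence decay among its features). A sketch of how such decay might behave; the half-life constant and the formula are assumptions, not the actual implementation:

```python
# Illustrative exponential decay of a lesson's confidence score.
# Both the 30-day half-life and the formula are assumptions; the real
# logic lives in axel/learning.py.
HALF_LIFE_DAYS = 30.0

def decayed_confidence(confidence: float, age_days: float) -> float:
    return confidence * 0.5 ** (age_days / HALF_LIFE_DAYS)

# A lesson stored at 0.92 confidence is worth half that after one half-life
print(round(decayed_confidence(0.92, 30.0), 2))
```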

---

## Docker

```bash
# Start the server
docker compose up

# Server at http://localhost:7331
# Swagger at http://localhost:7331/docs
# SSE stream at http://localhost:7331/stream
```

Environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `AXEL_HOST` | `0.0.0.0` | Bind address |
| `AXEL_PORT` | `7331` | Port |
| `AXEL_LOG_LEVEL` | `info` | Logging level |
| `AXEL_MEMORY_PATH` | `~/.axel/memory.json` | Persistent memory file |
| `ANTHROPIC_API_KEY` | — | For Claude adapters |
| `OPENAI_API_KEY` | — | For GPT adapters |

---

## Contributing

AXEL is MIT-licensed and welcomes contributions. The highest-impact areas are:

- **New LLM adapters** — Gemini, Mistral, Cohere, Together, Groq
- **Persistent memory backends** — Redis, SQLite, Postgres
- **Agent templates** — pre-built Researcher, Writer, Coder, Reviewer classes
- **Language ports** — TypeScript/Node.js client, Rust client

See [`CONTRIBUTING.md`](CONTRIBUTING.md) to get started.

---

## Roadmap

- [x] Core protocol (23 message types)
- [x] HTTP network server (FastAPI)
- [x] Python client SDK (stdlib only)
- [x] MockAdapter, AnthropicAdapter, OpenAIAdapter, OllamaAdapter
- [x] **OpenRouterAdapter** — one key, 100+ models, free tier
- [x] GeminiAdapter, MistralAdapter, GroqAdapter, TogetherAdapter, CohereAdapter, BedrockAdapter, LiteLLMAdapter
- [x] Shared memory with BM25 search
- [x] Pub/Sub channels
- [x] Chain pipeline execution
- [x] Live SSE dashboard (`/ui`)
- [x] SQLite persistence (`--db` flag)
- [x] Agent liveness tracking
- [x] `axel` CLI (status / agents / send / memory / discover / models / setup)
- [x] `axel setup` — interactive API key wizard
- [ ] TypeScript/Node.js client
- [ ] Agent templates library
- [ ] gRPC transport option
- [ ] AXEL Studio (visual network editor)

---

## License

MIT — see [LICENSE](LICENSE).
Copyright © 2024 Sector X. Free to use, modify, and build upon.

---

*Built to make every model smarter by letting them think together.*
