Metadata-Version: 2.4
Name: multi-agent-base
Version: 0.1.0b1
Summary: A reusable foundation for building multi-agent systems with observability, cost tracking, and A2A protocol support
Project-URL: Homepage, https://github.com/gokhandiker/multi-agent-base
Project-URL: Documentation, https://github.com/gokhandiker/multi-agent-base/tree/main/docs
Project-URL: Repository, https://github.com/gokhandiker/multi-agent-base
Project-URL: Issues, https://github.com/gokhandiker/multi-agent-base/issues
Author-email: Gokhan Diker <gokhandiker@gmail.com>
License-Expression: MIT
Keywords: a2a-protocol,agents,ai,llm,multi-agent,observability,opentelemetry,phoenix
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: agent-framework>=1.0.0b0
Requires-Dist: arize-phoenix-otel>=0.5.0
Requires-Dist: arize-phoenix>=12.0.0
Requires-Dist: asyncio>=3.4.3
Requires-Dist: opentelemetry-api>=1.20.0
Requires-Dist: opentelemetry-sdk>=1.20.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: structlog>=24.0.0
Requires-Dist: tenacity>=8.0.0
Provides-Extra: all
Requires-Dist: agent-framework-devui>=1.0.0b0; extra == 'all'
Requires-Dist: aiohttp>=3.9.0; extra == 'all'
Requires-Dist: anthropic>=0.30.0; extra == 'all'
Requires-Dist: ollama>=0.3.0; extra == 'all'
Requires-Dist: openai>=1.0.0; extra == 'all'
Requires-Dist: redis>=5.0.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.30.0; extra == 'anthropic'
Provides-Extra: dev
Requires-Dist: mypy>=1.8.0; extra == 'dev'
Requires-Dist: pre-commit>=3.6.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest-mock>=3.12.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.3.0; extra == 'dev'
Provides-Extra: devui
Requires-Dist: agent-framework-devui>=1.0.0b0; extra == 'devui'
Requires-Dist: aiohttp>=3.9.0; extra == 'devui'
Provides-Extra: ollama
Requires-Dist: ollama>=0.3.0; extra == 'ollama'
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == 'openai'
Provides-Extra: redis
Requires-Dist: redis>=5.0.0; extra == 'redis'
Description-Content-Type: text/markdown

# Multi-Agent Base Framework

[![PyPI version](https://badge.fury.io/py/multi-agent-base.svg)](https://badge.fury.io/py/multi-agent-base)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Tests](https://img.shields.io/badge/tests-1243%20passed-brightgreen.svg)](https://github.com/gokhandiker/multi-agent-base)
[![Coverage](https://img.shields.io/badge/coverage-67%25-yellowgreen.svg)](https://github.com/gokhandiker/multi-agent-base)

A production-ready Python framework for building multi-agent AI systems with comprehensive observability, cost tracking, resilience patterns, and A2A protocol support.

## 🚀 Installation

```bash
# Basic installation from PyPI
pip install multi-agent-base

# Or install from GitHub (latest)
pip install git+https://github.com/gokhandiker/multi-agent-base.git
```

### Optional Dependencies

```bash
# With specific provider support
pip install multi-agent-base[ollama]      # Ollama support
pip install multi-agent-base[openai]      # OpenAI support
pip install multi-agent-base[anthropic]   # Anthropic support

# With DevUI for debugging
pip install multi-agent-base[devui]

# With Redis support (for distributed caching/rate limiting)
pip install multi-agent-base[redis]

# All features
pip install multi-agent-base[all]

# Development (includes testing tools)
pip install multi-agent-base[all,dev]
```

## ⚡ Quick Start

### Basic Agent

```python
from multi_agent_base.core import AgentConfig
from multi_agent_base.providers import ModelClientFactory

# Create a simple agent
config = AgentConfig(
    name="assistant",
    model="gpt-4o-mini",
    provider="openai",
    system_prompt="You are a helpful assistant.",
)

# Use with your preferred client
client = ModelClientFactory.create(config)
response = await client.chat("Hello, how are you?")
print(response)
```

### With Observability (Phoenix Tracing)

```python
from multi_agent_base.observability import setup_phoenix, AgentLogger

# Setup Phoenix tracing
tracer = setup_phoenix(project_name="my-agents")

# Create logger
logger = AgentLogger()
logger.log_agent_start("assistant")
logger.log_llm_call(
    agent_name="assistant",
    model="gpt-4o",
    provider="openai",
    input_tokens=50,
    output_tokens=100,
    duration_ms=250.0,
)
logger.log_agent_end("assistant", duration_ms=500.0)
```

### With Memory

```python
from multi_agent_base.memory import BufferMemory, SlidingWindowMemory

# Simple buffer memory
memory = BufferMemory(max_entries=100)
await memory.add("user", "Hello!")
await memory.add("assistant", "Hi there!")
history = await memory.get_history()

# Sliding window (keeps last N messages)
memory = SlidingWindowMemory(window_size=10)
```

### With Resilience Patterns

```python
from multi_agent_base.resilience import (
    retry_with_backoff,
    CircuitBreaker,
    with_timeout,
    Fallback,
)

# Retry with exponential backoff
@retry_with_backoff(max_attempts=3, base_delay=1.0)
async def call_api():
    return await risky_operation()

# Circuit breaker for external services
breaker = CircuitBreaker(failure_threshold=5, recovery_timeout=30)

async with breaker:
    result = await external_service()

# Timeout wrapper
result = await with_timeout(slow_operation(), timeout=5.0)

# Fallback chain
fallback = Fallback(default="Service unavailable")
result = await fallback.execute(primary_service)
```

### With Cost Tracking

```python
from multi_agent_base.providers import PricingCalculator

# Calculate costs
cost = PricingCalculator.calculate(
    provider="openai",
    model="gpt-4o",
    input_tokens=1000,
    output_tokens=500,
)
print(f"Cost: ${cost:.4f}")
```

## ✨ Features

- 🤖 **Multi-Provider LLM Support**: Ollama, OpenAI, Anthropic via Microsoft Agent Framework
- 📊 **Full Observability**: Agent conversations, tool usage, inputs/outputs via Arize Phoenix
- 💰 **Cost Tracking**: Token usage and cost calculation per model/provider
- 🏗️ **Parametric Architectures**: SingleAgent, Supervisor, Swarm patterns
- 🎴 **A2A Agent Cards**: Agent metadata and capability declaration
- 🔍 **Skill Auto-Discovery**: Automatic skill extraction from tool functions
- 📝 **Structured Logging**: OpenTelemetry-based tracing
- 🧠 **Conversation Memory**: Multiple backends including buffer, sliding window, vector, and Redis
- 🔄 **Resilience Patterns**: Retry strategies, circuit breaker, timeout handling, fallbacks
- ⏱️ **Rate Limiting**: Token bucket, sliding window, and composite limiters
- 🗄️ **Response Caching**: LRU, TTL, and semantic caching strategies
- 📡 **Event System**: Pub/sub event bus for inter-agent communication
- 🔒 **Security**: Input validation, injection detection, permission management

## Installation

```bash
# Basic installation
pip install multi-agent-base

# With specific provider support
pip install multi-agent-base[ollama]
pip install multi-agent-base[openai]
pip install multi-agent-base[anthropic]

# All providers
pip install multi-agent-base[all]

# Development
pip install multi-agent-base[dev]
```

## Quick Start

### Single Agent

```python
from multi_agent_base import SystemConfig, SingleAgentPattern
from multi_agent_base.providers import ModelClientFactory

# Configure system
config = SystemConfig(
    provider="openai",
    model="gpt-4o-mini",
    observability_enabled=True,
)

# Create agent
pattern = SingleAgentPattern(config)
agent = pattern.create_agent(
    name="assistant",
    system_prompt="You are a helpful assistant.",
)

# Run
response = await agent.run("Hello, how are you?")
print(response)
```

### Supervisor Team

```python
from multi_agent_base import SystemConfig, SupervisorPattern

config = SystemConfig(
    provider="ollama",
    model="llama3.2",
    observability_enabled=True,
)

pattern = SupervisorPattern(config)
team = pattern.create_team(
    supervisor_name="manager",
    worker_configs=[
        {"name": "researcher", "system_prompt": "You research topics."},
        {"name": "writer", "system_prompt": "You write content."},
    ]
)

response = await team.run("Write a blog post about AI agents.")
```

### Agent Cards

```python
from multi_agent_base.a2a import AgentCard, SkillDiscoverer

# Auto-discover skills from tools
discoverer = SkillDiscoverer()
skills = discoverer.discover_from_tools([my_tool_function])

# Create agent card
card = AgentCard(
    name="research-agent",
    description="An agent that researches topics",
    skills=skills,
    capabilities=["text-generation", "web-search"],
)

# Export as JSON
card.to_json("agent_card.json")
```

### Memory System

```python
from multi_agent_base.memory import BufferMemory, MemoryConfig

# Create memory with configuration
memory = BufferMemory(MemoryConfig(max_entries=100))

# Store conversation
await memory.store(role="user", content="Hello!")
await memory.store(role="assistant", content="Hi there!")

# Retrieve history
history = await memory.retrieve(limit=10)
```

### Rate Limiting

```python
from multi_agent_base.ratelimit import RateLimiter, RateLimitConfig

# Create rate limiter
limiter = RateLimiter(RateLimitConfig(
    requests_per_minute=60,
    tokens_per_minute=10000,
))

# Check before making API calls
if await limiter.can_acquire():
    await limiter.acquire(tokens=100)
    # Make API call
```

### Resilience

```python
from multi_agent_base.resilience import retry_with_backoff, CircuitBreaker

# Retry with exponential backoff
@retry_with_backoff(max_attempts=3, delay=1.0)
async def call_api():
    return await risky_operation()

# Circuit breaker pattern
breaker = CircuitBreaker(failure_threshold=5, recovery_timeout=30)
async with breaker:
    result = await external_service()
```

### Security

```python
from multi_agent_base.security import (
    validate_input,
    check_injection,
    SecretManager,
)

# Input validation
validation = validate_input(user_input, max_length=1000)
if not validation.is_valid:
    raise ValueError(validation.errors)

# Injection detection
result = check_injection(user_input)
if not result.is_safe:
    log_security_event(result.threats)

# Secure secrets management
secrets = SecretManager()
api_key = secrets.get("OPENAI_API_KEY")
```

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                     Multi-Agent Base                            │
├─────────────────────────────────────────────────────────────────┤
│  Patterns                                                       │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐            │
│  │ SingleAgent  │ │  Supervisor  │ │    Swarm     │            │
│  └──────────────┘ └──────────────┘ └──────────────┘            │
├─────────────────────────────────────────────────────────────────┤
│  Cross-Cutting Concerns                                         │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐  │
│  │ Memory  │ │Resilience│ │  Rate   │ │  Cache  │ │Security │  │
│  │         │ │         │ │ Limiting│ │         │ │         │  │
│  └─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘  │
├─────────────────────────────────────────────────────────────────┤
│  Core Services                                                  │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐            │
│  │  A2A Cards   │ │    Cost      │ │ Observability│            │
│  │ & Discovery  │ │   Tracker    │ │   (Phoenix)  │            │
│  └──────────────┘ └──────────────┘ └──────────────┘            │
│  ┌──────────────┐                                               │
│  │ Event System │                                               │
│  └──────────────┘                                               │
├─────────────────────────────────────────────────────────────────┤
│  LLM Providers (via Microsoft Agent Framework)                  │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐            │
│  │   Ollama     │ │   OpenAI     │ │  Anthropic   │            │
│  └──────────────┘ └──────────────┘ └──────────────┘            │
└─────────────────────────────────────────────────────────────────┘
```

## Documentation

- [Getting Started](docs/getting-started.md)
- [Configuration Guide](docs/configuration.md)
- [Architecture Patterns](docs/patterns.md)
- [A2A Agent Cards](docs/a2a-agent-cards.md)
- [Observability](docs/observability.md)
- [Cost Tracking](docs/cost-tracking.md)
- [Memory System](docs/memory.md)
- [Resilience Patterns](docs/resilience.md)
- [Rate Limiting](docs/rate-limiting.md)
- [Caching](docs/caching.md)
- [Event System](docs/events.md)
- [Security](docs/security.md)
- [API Reference](docs/api-reference.md)

## Development

```bash
# Clone repository
git clone https://github.com/gokhandiker/multi-agent-base.git
cd multi-agent-base

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install with dev dependencies
pip install -e ".[dev,all]"

# Run tests
pytest

# Run linting
ruff check src tests
mypy src
```

## 📦 Version History

| Version | Date | Changes |
|---------|------|---------|
| 0.1.0b1 | 2026-02-02 | Initial beta release with 18 modules, 1243 tests |

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## 📄 License

MIT License - See [LICENSE](LICENSE) for details.
