Metadata-Version: 2.4
Name: aixtools
Version: 0.1.10
Summary: Tools for AI exploration and debugging
Requires-Python: >=3.11.2
Description-Content-Type: text/markdown
Requires-Dist: a2a-sdk>=0.3.1
Requires-Dist: cachebox>=5.0.1
Requires-Dist: chainlit>=2.5.5
Requires-Dist: colorlog>=6.9.0
Requires-Dist: fasta2a>=0.5.0
Requires-Dist: fastmcp>=2.10.2
Requires-Dist: hvac>=2.3.0
Requires-Dist: ipykernel>=6.29.5
Requires-Dist: langchain-chroma>=0.2.3
Requires-Dist: langchain-ollama>=0.3.2
Requires-Dist: langchain-openai>=0.3.14
Requires-Dist: mcp>=1.11.0
Requires-Dist: pandas>=2.2.3
Requires-Dist: pydantic-ai>=0.4.10
Requires-Dist: pylint>=3.3.7
Requires-Dist: rich>=14.0.0
Requires-Dist: ruff>=0.11.6
Requires-Dist: streamlit>=1.44.1
Requires-Dist: watchdog>=6.0.0
Provides-Extra: test
Requires-Dist: pyyaml; extra == "test"
Provides-Extra: feature
Requires-Dist: logfire; extra == "feature"

# AIXtools

AIXtools is a Python library for AI agent development, debugging, and deployment. It provides tooling for building, testing, and monitoring AI agents, with support for multiple model providers, structured logging, and agent-to-agent (A2A) communication.

## Capabilities

Agents
- Agent Development & Management - `aixtools/agents/`
- Agent Batch Processing - `aixtools/agents/agent_batch.py`
- Agent Prompting System - `aixtools/agents/prompt.py`

A2A
- Agent-to-Agent Communication (A2A) - `aixtools/a2a/`
- Google SDK Integration for A2A - `aixtools/a2a/google_sdk/`
- PydanticAI Adapter for Google SDK - `aixtools/a2a/google_sdk/pydantic_ai_adapter/`

Databases
- Database Integration - `aixtools/db/`
- Vector Database Support - `aixtools/db/vector_db.py`

Logging & Debugging
- Log Viewing Application - `aixtools/log_view/`
- Object Logging System - `aixtools/logging/`
- Model Patch Logging - `aixtools/logging/model_patch_logging.py`
- Log Filtering System - `aixtools/logfilters/`
- FastMCP Logging - `aixtools/mcp/fast_mcp_log.py`
- Command Line Interface for Log Viewing - Entry point: `log_view`
- MCP (Model Context Protocol) Support - `aixtools/logging/mcp_log_models.py`, `aixtools/logging/mcp_logger.py`

Testing & Tools
- Testing Utilities - `aixtools/testing/`
- Mock Tool System - `aixtools/testing/mock_tool.py`
- Model Patch Caching - `aixtools/testing/model_patch_cache.py`
- Tool Doctor System - `aixtools/tools/doctor/`
- Tool Recommendation Engine - `aixtools/tools/doctor/tool_recommendation.py`
- FaultyMCP - `aixtools/mcp/faulty_mcp.py`

Chainlit & HTTP Server
- Chainlit Integration - `aixtools/app.py`, `aixtools/chainlit.md`
- Chainlit Utilities - `aixtools/utils/chainlit/`
- HTTP Server Framework - `aixtools/server/`
- App Mounting System - `aixtools/server/app_mounter.py`

Programming Utils
- Persisted Dictionary - `aixtools/utils/persisted_dict.py`
- Enum with Description - `aixtools/utils/enum_with_description.py`
- Context Management - `aixtools/context.py`
- Configuration Management - `aixtools/utils/config.py`, `aixtools/utils/config_util.py`
- File Utilities - `aixtools/utils/files.py`

## Installation

### From PyPI

```bash
uv add aixtools
```

### Development Setup

```bash
# Create a new project
uv init MyNewProject
cd MyNewProject

# Create a virtual environment and activate it
uv venv .venv
source .venv/bin/activate

# Add this package
uv add aixtools
```

### Updating

```bash
uv add --upgrade aixtools
```

## Environment Configuration

AIXtools requires environment variables for model providers. 

**IMPORTANT:** Create a `.env` file based on [`.env_template`](./.env_template):

```bash
# Model family (azure, openai, or ollama)
MODEL_FAMILY=azure
MODEL_TIMEOUT=120

# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://your_endpoint.openai.azure.com
AZURE_OPENAI_API_VERSION=2024-06-01
AZURE_OPENAI_API_KEY=your_secret_key
AZURE_MODEL_NAME=gpt-4o

# OpenAI
OPENAI_MODEL_NAME=gpt-4.5-preview
OPENAI_API_KEY=openai_api_key

# Ollama
OLLAMA_MODEL_NAME=llama3.2:3b-instruct-fp16
OLLAMA_LOCAL_URL=http://localhost:11434/v1
```
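Only the variables for the selected `MODEL_FAMILY` need to be set. As an illustration, a startup sanity check might look like the following (the `PROVIDER_VARS` table and `missing_vars` helper are hypothetical, not part of aixtools; the variable names follow the template above):

```python
import os

# Hypothetical mapping of model family -> required variables (per the template above).
PROVIDER_VARS = {
    "azure": ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_VERSION",
              "AZURE_OPENAI_API_KEY", "AZURE_MODEL_NAME"],
    "openai": ["OPENAI_MODEL_NAME", "OPENAI_API_KEY"],
    "ollama": ["OLLAMA_MODEL_NAME", "OLLAMA_LOCAL_URL"],
}

def missing_vars(family: str, env: dict[str, str]) -> list[str]:
    """Return the provider variables not yet set for the chosen family."""
    return [name for name in PROVIDER_VARS[family] if not env.get(name)]

family = os.environ.get("MODEL_FAMILY", "azure")
print(missing_vars(family, dict(os.environ)))
```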

## Agents

### Basic Agent Usage

```python
import asyncio

from aixtools.agents.agent import get_agent, run_agent

async def main():
    agent = get_agent(system_prompt="You are a helpful assistant.")
    result, nodes = await run_agent(agent, "Explain quantum computing")
    print(result)

asyncio.run(main())
```

### Agent Development & Management

The agent system provides a unified interface for creating and managing AI agents across different model providers.

```python
from aixtools.agents.agent import get_agent, run_agent

# Create an agent with the default model
agent = get_agent(system_prompt="You are a helpful assistant.")

# Run the agent (from within an async function)
result, nodes = await run_agent(agent, "Tell me about AI")
```

### Agent Batch Processing

Process multiple agent queries simultaneously with built-in concurrency control and result aggregation.

```python
from aixtools.agents.agent_batch import agent_batch, AgentQueryParams

# Create query parameters
query_parameters = [
    AgentQueryParams(prompt="What is the meaning of life?"),
    AgentQueryParams(prompt="Who is the prime minister of Canada?"),
]

# Run queries in batches (from within an async function)
async for result in agent_batch(query_parameters):
    print(result)
```

## A2A (Agent-to-Agent Communication)

The A2A module provides a framework for communication between AI agents across environments and platforms. It includes Google SDK integration, PydanticAI adapters, and FastA2A application conversion.

### Core Features

**Agent Application Conversion**
- Convert PydanticAI agents into FastA2A applications
- Support for session metadata extraction and context management
- Custom worker classes with enhanced data part support
- Automatic handling of user and session identification

**Remote Agent Connections**
- Establish connections between agents across different environments
- Asynchronous message sending with task polling capabilities
- Terminal state detection and error handling
- Support for various message types including text, files, and data

**Google SDK Integration**
- Native integration with Google's A2A SDK
- Card-based agent representation and discovery
- PydanticAI adapter for seamless Google SDK compatibility
- Storage and execution management for agent interactions

### Agent-to-Agent Communication (A2A)

Enable sophisticated agent interactions with Google SDK integration and PydanticAI adapters.

```python
from aixtools.a2a.google_sdk.remote_agent_connection import RemoteAgentConnection
from aixtools.a2a.app import agent_to_a2a

# Convert a PydanticAI agent to FastA2A application
a2a_app = agent_to_a2a(
    agent=my_agent,
    name="MyAgent",
    description="A helpful AI assistant",
    skills=[{"name": "chat", "description": "General conversation"}]
)

# Connect agents across different environments (from within an async function)
connection = RemoteAgentConnection(card=agent_card, client=a2a_client)
response = await connection.send_message_with_polling(message)
```

## Databases

### Database Integration

Support for both relational and vector databases.

```python
from aixtools.db.database import Database
from aixtools.db.vector_db import VectorDB

# Traditional database
db = Database("sqlite:///app.db")

# Vector database for embeddings
vector_db = VectorDB()
vector_db.add_documents(documents)
```

## Logging & Debugging

AIXtools provides a layered logging and debugging stack: automatic object logging during agent runs, an interactive log viewer, and MCP-aware loggers.

### Basic Logging and Debugging

```python
import asyncio

from aixtools.agents.agent import get_agent, run_agent

async def main():
    # Create an agent
    agent = get_agent(system_prompt="You are a helpful assistant.")

    # Run agent - logging is automatic via ObjectLogger
    result, nodes = await run_agent(
        agent,
        "Explain quantum computing",
        debug=True,  # Enable debug logging
        log_model_requests=True  # Log model requests/responses
    )

    print(f"Result: {result}")
    print(f"Logged {len(nodes)} nodes")

asyncio.run(main())
```

### Log Viewing Application

Interactive Streamlit application for analyzing logged objects and debugging agent behavior.

**Features:**
- Log file selection and filtering
- Node visualization with expand/collapse
- Export capabilities to JSON
- Regex pattern matching
- Real-time log monitoring

```bash
# Run the log viewer
log_view

# Or specify custom log directory
log_view /path/to/logs
```
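The regex filtering the viewer offers can be illustrated in plain Python. This is a hypothetical stand-in for demonstration only; the real viewer operates on serialized `ObjectLogger` nodes inside Streamlit:

```python
import re

# Hypothetical log records; the real viewer reads serialized ObjectLogger nodes.
records = [
    {"level": "INFO", "message": "model request sent"},
    {"level": "ERROR", "message": "tool_call failed: timeout"},
    {"level": "INFO", "message": "tool_call succeeded"},
]

def filter_records(records, pattern: str):
    """Keep records whose message matches the given regex pattern."""
    rx = re.compile(pattern)
    return [r for r in records if rx.search(r["message"])]

print(filter_records(records, r"tool_call"))  # keeps the two tool_call records
```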

### Object Logging & Debugging

Advanced logging system with object serialization and visual debugging tools.

```python
from aixtools.logging.log_objects import ObjectLogger

# Log any pickleable object
with ObjectLogger() as logger:
    logger.log({"message": "Hello, world!"})
    logger.log(agent_response)
```

### MCP Logger

An MCP server wrapper that logs MCP requests and responses as they pass through.

```python
from aixtools.mcp.fast_mcp_log import FastMcpLog

# Use FastMCP server with logging
mcp = FastMcpLog("Demo")
```

### Model Patching System

Dynamic model behavior modification for testing and debugging.

```python
from aixtools.model_patch.model_patch import ModelPatch

# Apply patches to models for testing
with ModelPatch() as patch:
    patch.apply_response_override("test response")
    result = await agent.run("test prompt")
```

### FaultyMCP

A specialized MCP server designed for testing error handling and resilience in MCP client implementations. FaultyMCP simulates various failure scenarios including network errors, server crashes, and random exceptions.

**Features:**
- Configurable error probabilities for different request types
- HTTP 404 error injection for POST/DELETE requests
- Server crash simulation on GET requests
- Random exception throwing in tool operations
- MCP-specific error simulation (ValidationError, ResourceError, etc.)
- Safe mode for controlled testing

```python
from aixtools.mcp.faulty_mcp import run_server_on_port, config

# Configure error probabilities
config.prob_on_post_404 = 0.3      # 30% chance of 404 on POST
config.prob_on_get_crash = 0.1     # 10% chance of crash on GET
config.prob_in_list_tools_throw = 0.2  # 20% chance of exception in tools/list

# Run the faulty server
run_server_on_port()
```

**Command Line Usage:**
```bash
# Run with default error probabilities
python -m aixtools.mcp.faulty_mcp

# Run in safe mode (no errors by default)
python -m aixtools.mcp.faulty_mcp --safe-mode

# Custom configuration
python -m aixtools.mcp.faulty_mcp \
    --port 8888 \
    --prob-on-post-404 0.2 \
    --prob-on-get-crash 0.1 \
    --prob-in-list-tools-throw 0.3
```

By default, FaultyMCP exposes several tools you can use in your tests:
- `add(a, b)` - Basic addition (reliable)
- `multiply(a, b)` - Basic multiplication (reliable)
- `always_error()` - Always throws an exception
- `random_throw_exception(a, b, prob)` - Randomly throws exceptions
- `freeze_server(seconds)` - Simulates server freeze
- `throw_404_exception()` - Throws HTTP 404 error
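Tools like `random_throw_exception` make calls fail nondeterministically, so client tests typically wrap them in a retry helper. A minimal, self-contained sketch in plain Python (the `flaky_add` function below only simulates a FaultyMCP tool; it does not call the server):

```python
import random

def call_with_retries(fn, attempts: int = 3):
    """Call fn(), retrying on exceptions up to `attempts` times."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
    raise last_exc

def flaky_add(a, b, prob=0.5, rng=random.Random(0)):
    """Stand-in for random_throw_exception: fails with probability `prob`."""
    if rng.random() < prob:
        raise RuntimeError("simulated MCP tool failure")
    return a + b

print(call_with_retries(lambda: flaky_add(2, 3)))  # -> 5 (first attempt succeeds with this seed)
```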

## Testing & Tools

AIXtools provides testing utilities and diagnostic tools for AI agent development and debugging.

### Testing Utilities

The testing module provides mock tools, model patching, and test utilities for comprehensive agent testing.

```python
from aixtools.testing.mock_tool import MockTool
from aixtools.testing.model_patch_cache import ModelPatchCache
from aixtools.testing.aix_test_model import AixTestModel

# Create mock tools for testing
mock_tool = MockTool(name="test_tool", response="mock response")

# Use model patch caching for consistent test results
cache = ModelPatchCache()
cached_response = cache.get_cached_response("test_prompt")

# Test model for controlled testing scenarios
test_model = AixTestModel()
```

### Tool Doctor System

Automated tool analysis and recommendation system for optimizing agent tool usage.

```python
from aixtools.tools.doctor.tool_doctor import ToolDoctor
from aixtools.tools.doctor.tool_recommendation import ToolRecommendation

# Analyze tool usage patterns
doctor = ToolDoctor()
analysis = doctor.analyze_tools(agent_logs)

# Get tool recommendations
recommendation = ToolRecommendation()
suggestions = recommendation.recommend_tools(agent_context)
```

### Mock Tool System

Create and manage mock tools for testing agent behavior without external dependencies.

```python
from aixtools.agents.agent import get_agent, run_agent
from aixtools.testing.mock_tool import MockTool

# Create a mock tool with predefined responses
mock_calculator = MockTool(
    name="calculator",
    description="Performs mathematical calculations",
    response_map={
        "2+2": "4",
        "10*5": "50"
    }
)

# Use in agent testing (from within an async function)
agent = get_agent(tools=[mock_calculator])
result, nodes = await run_agent(agent, "What is 2+2?")
```

### Model Patch Caching

Cache model responses for consistent testing and development workflows.

```python
from aixtools.testing.model_patch_cache import ModelPatchCache

# Initialize cache
cache = ModelPatchCache(cache_dir="./test_cache")

# Cache responses for specific prompts
cache.cache_response("test prompt", "cached response")

# Retrieve cached responses
response = cache.get_cached_response("test prompt")
```

### FaultyMCP Testing Server

FaultyMCP (documented under [Logging & Debugging](#faultymcp) above) doubles as a test fixture: run `python -m aixtools.mcp.faulty_mcp` alongside your client tests to exercise error handling against injected 404s, server crashes, and random tool exceptions.

### Running Tests

Execute the test suite using the provided scripts:

```bash
# Run all tests
./scripts/test.sh

# Run unit tests only
./scripts/test_unit.sh

# Run integration tests only
./scripts/test_integration.sh
```

## Chainlit & HTTP Server

### Chainlit Integration

Ready-to-use Chainlit application for interactive agent interfaces.

```bash
# Run the bundled Chainlit app (main app: aixtools/app.py,
# UI configuration: aixtools/chainlit.md)
chainlit run aixtools/app.py  # path assumes a source checkout
```

## Programming Utils

AIXtools provides essential programming utilities for configuration management, data persistence, file operations, and context handling.

### Persisted Dictionary

File-backed key-value storage with automatic serialization.

```python
from aixtools.utils.persisted_dict import PersistedDict

# Create a persistent dictionary
cache = PersistedDict("cache.json")

# Store and retrieve data
cache["user_preferences"] = {"theme": "dark", "language": "en"}
cache["session_data"] = {"last_login": "2024-01-01"}

# Data is automatically saved to file
print(cache["user_preferences"])  # Persists across program restarts
```

### Enum with Description

Enhanced enum classes with built-in descriptions for better documentation and user interfaces.

```python
from aixtools.utils.enum_with_description import EnumWithDescription

class ModelType(EnumWithDescription):
    GPT4 = ("gpt-4", "OpenAI GPT-4 model")
    CLAUDE = ("claude-3", "Anthropic Claude-3 model")
    LLAMA = ("llama-2", "Meta LLaMA-2 model")

# Access enum values and descriptions
print(ModelType.GPT4.value)        # "gpt-4"
print(ModelType.GPT4.description)  # "OpenAI GPT-4 model"

# Get all descriptions
for model in ModelType:
    print(f"{model.value}: {model.description}")
```

### Context Management

Centralized context management for sharing state across components.

```python
from aixtools.context import Context

# Create and use context
context = Context()
context.set("user_id", "12345")
context.set("session_data", {"preferences": {"theme": "dark"}})

# Retrieve context data
user_id = context.get("user_id")
session_data = context.get("session_data")

# Context can be passed between components
def process_request(ctx: Context):
    user_id = ctx.get("user_id")
    # Process with user context
```

### Configuration Management

Robust configuration handling with environment variable support and validation.

```python
from aixtools.utils.config import Config
from aixtools.utils.config_util import load_config

# Load configuration from environment and files
config = load_config()

# Access configuration values
model_name = config.get("MODEL_NAME", "gpt-4")
api_key = config.get("API_KEY")
timeout = config.get("TIMEOUT", 30, int)

# Configuration with validation
class AppConfig(Config):
    model_name: str = "gpt-4"
    max_tokens: int = 1000
    temperature: float = 0.7

app_config = AppConfig()
```

### File Utilities

Enhanced file operations with Path support and utility functions.

```python
from aixtools.utils.files import read_file, write_file, ensure_directory
from pathlib import Path

# Read and write files with automatic encoding handling
content = read_file("data.txt")
write_file("output.txt", "Hello, world!")

# Ensure directories exist
data_dir = Path("data/logs")
ensure_directory(data_dir)

# Work with file paths
config_path = Path("config") / "settings.json"
if config_path.exists():
    config_data = read_file(config_path)
```

### Chainlit Utilities

Specialized utilities for Chainlit integration and agent display.

```python
from aixtools.utils.chainlit.cl_agent_show import show_agent_response
from aixtools.utils.chainlit.cl_utils import format_message

# Display agent responses in Chainlit (inside an async Chainlit handler)
await show_agent_response(
    response="Hello, how can I help you?",
    metadata={"model": "gpt-4", "tokens": 150}
)

# Format messages for Chainlit display
formatted_msg = format_message(
    content="Processing your request...",
    message_type="info"
)
```

### General Utilities

Common utility functions for everyday programming tasks.

```python
from aixtools.utils.utils import safe_json_loads, timestamp_now, hash_string

# Safe JSON parsing
data = safe_json_loads('{"key": "value"}', default={})

# Get current timestamp
now = timestamp_now()

# Generate hash for strings
file_hash = hash_string("content to hash")
```

