Metadata-Version: 2.4
Name: em-agent-framework
Version: 1.0.0
Summary: Production-ready AI agent framework purpose-built for Vertex AI
Author-email: Emergence AI <deepak@emergence.ai>
Maintainer-email: Emergence AI <deepak@emergence.ai>
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/emergence-ai/em-agent-framework
Project-URL: Documentation, https://em-agent-framework.readthedocs.io
Project-URL: Repository, https://github.com/emergence-ai/em-agent-framework
Project-URL: Issues, https://github.com/emergence-ai/em-agent-framework/issues
Project-URL: Changelog, https://github.com/emergence-ai/em-agent-framework/blob/main/CHANGELOG.md
Keywords: ai,agent,llm,vertex-ai,gemini,claude,anthropic,multi-agent,tool-calling
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Typing :: Typed
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: google-genai<2.0.0,>=1.0.0
Requires-Dist: python-dotenv>=1.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Provides-Extra: examples
Requires-Dist: pandas>=2.0.0; extra == "examples"
Provides-Extra: docs
Requires-Dist: sphinx>=5.0.0; extra == "docs"
Requires-Dist: sphinx-rtd-theme>=1.0.0; extra == "docs"
Requires-Dist: sphinx-autodoc-typehints>=1.0.0; extra == "docs"
Provides-Extra: all
Requires-Dist: em-agent-framework[dev,docs,examples]; extra == "all"
Dynamic: license-file

# em-agent-framework

A production-ready AI agent framework with support for Gemini and Anthropic (Claude) models via Vertex AI.

## Features

- **Multi-Model Support**: Seamless integration with Gemini and Anthropic models
- **Automatic Fallback**: Cascading fallback across multiple models on failure
- **Parallel Tool Execution**: Execute independent function calls concurrently
- **Context Injection**: Pass large data or secrets to tools without sending them through the LLM
- **Group Chat**: Multi-agent conversations with flexible routing strategies
- **Metrics & Observability**: Built-in tracking for performance and usage

## Installation

```bash
pip install em-agent-framework
```

### Requirements
- Python 3.8+
- Google Cloud Project with Vertex AI enabled
- Vertex AI API credentials configured

## Quick Start

```python
import asyncio
from typing import Annotated
from em_agent_framework.core.agent import Agent
from em_agent_framework.config.settings import ModelConfig, AgentConfig

# Define a tool
def get_weather(city: Annotated[str, "City name"]) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: Sunny, 72°F"

async def main():
    # Configure models with fallback
    model_configs = [
        ModelConfig(name="gemini-2.0-flash-exp", provider="gemini"),
        ModelConfig(name="claude-3-5-sonnet-v2@20241022", provider="anthropic"),
    ]

    # Create agent
    agent = Agent(
        name="assistant",
        system_instruction="You are a helpful assistant.",
        tools=[get_weather],
        model_configs=model_configs,
        agent_config=AgentConfig(verbose=True)
    )

    # Send message
    response = await agent.send_message("What's the weather in Tokyo?")
    print(response)

asyncio.run(main())
```

## Key Features

### Model Fallback

Automatically falls back to alternative models if the primary model fails:

```python
model_configs = [
    ModelConfig(name="gemini-2.0-flash-exp", provider="gemini"),  # Try first
    ModelConfig(name="claude-3-5-sonnet-v2@20241022", provider="anthropic"),  # Fallback
]
```
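Conceptually, fallback is a nested loop: retry the current model up to the retry limit, then move on to the next model in the list. The sketch below illustrates that control flow with a stand-in `call_model` function (the names and error type are hypothetical, not the framework's internals):

```python
import asyncio

class ModelError(Exception):
    """Stand-in for a provider error (rate limit, timeout, etc.)."""

async def call_model(name: str, prompt: str) -> str:
    # Placeholder for a real provider call; "primary" always fails here
    # so the fallback path is exercised.
    if name == "primary":
        raise ModelError("rate limited")
    return f"[{name}] response to: {prompt}"

async def send_with_fallback(models: list[str], prompt: str, retries: int = 2) -> str:
    """Try each model in order, retrying each one before falling back."""
    last_error = None
    for name in models:
        for _attempt in range(retries):
            try:
                return await call_model(name, prompt)
            except ModelError as exc:
                last_error = exc  # retry this model, then fall through
    raise last_error  # every model exhausted its retries

print(asyncio.run(send_with_fallback(["primary", "fallback"], "hello")))
```

In the real framework, `max_retries_per_model` in `AgentConfig` plays the role of `retries` here.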

### Parallel Tool Execution

Execute multiple independent function calls concurrently:

```python
agent_config = AgentConfig(
    enable_parallel_tools=True,
    max_parallel_tools=5
)
```
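The win from parallel execution is wall-clock time: independent calls overlap their I/O waits instead of queuing. The toy example below shows the idea with plain `asyncio.gather` (the tool bodies and latencies are made up for illustration):

```python
import asyncio
import time

# Two independent "tools" that each simulate network latency.
async def get_weather(city: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for an API call
    return f"Weather in {city}: Sunny"

async def get_news(topic: str) -> str:
    await asyncio.sleep(0.1)
    return f"Top story on {topic}: ..."

async def main() -> None:
    start = time.perf_counter()
    # Both calls run concurrently: total wall time is ~0.1s, not ~0.2s.
    weather, news = await asyncio.gather(
        get_weather("Tokyo"),
        get_news("AI"),
    )
    print(weather)
    print(news)
    print(f"Elapsed: {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```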

### Context Injection

Pass data to tools without sending it through the LLM (ideal for large datasets, API keys, or user info):

```python
def analyze_data(metric: Annotated[str, "Metric to analyze"], context: dict) -> str:
    """Analyze user data."""
    df = context.get('dataframe')  # Not sent to LLM
    userid = context.get('userid')  # Not sent to LLM
    return f"User {userid}: Analysis complete"

agent = Agent(
    name="analyst",
    tools=[analyze_data],
    context={
        'dataframe': large_df,      # Never sent to LLM
        'userid': 'USER_12345',     # Never sent to LLM
    },
    model_configs=model_configs,
    agent_config=agent_config
)
```

Benefits:
- **Reduce token costs**: Large data stays local
- **Security**: Sensitive data (API keys, credentials) stays private
- **Authentication**: Pass user info for validation
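One way to keep `context` out of the model-visible schema is to inspect each tool's signature and drop the reserved parameter before building the function declaration. The sketch below shows that idea; it is an assumption about the general mechanism, not em-agent-framework's actual schema-building code:

```python
import inspect
from typing import Annotated

def analyze_data(metric: Annotated[str, "Metric to analyze"], context: dict) -> str:
    """Analyze user data."""
    return f"Analyzing {metric}"

def llm_visible_params(tool, reserved=("context",)) -> list[str]:
    """Return only the parameter names the LLM should see for this tool."""
    sig = inspect.signature(tool)
    return [name for name in sig.parameters if name not in reserved]

print(llm_visible_params(analyze_data))  # 'context' is filtered out
```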

### Dynamic Tool Loading

Load tools on-demand instead of all at once:

```python
# Define tools
def basic_search(query: Annotated[str, "Search query"]) -> str:
    return f"Results for: {query}"

def advanced_analysis(data: Annotated[str, "Data to analyze"]) -> str:
    return f"Analysis of: {data}"

# Create agent with complementary tools
agent = Agent(
    name="assistant",
    tools=[basic_search],  # Loaded immediately
    complementary_tools=[advanced_analysis],  # Available via search_tool
    model_configs=model_configs,
    agent_config=agent_config
)

# Agent can dynamically load tools when needed
# Just mention the tool name and the agent will use search_tool to find and load it
response = await agent.send_message("Use advanced_analysis to analyze this data")
```
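The `search_tool` mechanism can be pictured as a small registry: complementary tools sit unloaded until the agent finds one by keyword and promotes it to the active set. This is a simplified sketch of that idea, not the framework's actual implementation:

```python
def advanced_analysis(data: str) -> str:
    """Run an in-depth analysis of the given data."""
    return f"Analysis of: {data}"

class ToolRegistry:
    """Holds complementary tools until the agent asks to load one."""

    def __init__(self, complementary_tools):
        self._available = {t.__name__: t for t in complementary_tools}
        self.loaded = {}

    def search(self, keyword: str) -> list[str]:
        """Find unloaded tools whose name or docstring mentions the keyword."""
        return [
            name for name, tool in self._available.items()
            if keyword in name or keyword in (tool.__doc__ or "")
        ]

    def load(self, name: str):
        """Promote a tool from 'available' to 'loaded' so the agent can call it."""
        self.loaded[name] = self._available.pop(name)
        return self.loaded[name]

registry = ToolRegistry([advanced_analysis])
print(registry.search("analysis"))  # the agent discovers the tool by keyword
tool = registry.load("advanced_analysis")
print(tool("sales data"))
```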

### Dynamic Instructions

Load specialized instructions on-demand:

Create an `instructions.json` file:

```json
{
    "instructions": [
        {
            "id": "code_review",
            "description": "Code review guidelines",
            "instruction": "Review code for: correctness, efficiency, security, readability"
        },
        {
            "id": "api_design",
            "description": "API design principles",
            "instruction": "Design RESTful APIs following best practices"
        }
    ]
}
```

Then point the agent at it:

```python
# Create agent with instructions file
agent = Agent(
    name="assistant",
    system_instruction="You are a code assistant.",
    instructions_file="instructions.json",
    model_configs=model_configs,
    agent_config=agent_config
)

# Agent can load instructions dynamically
response = await agent.send_message("Load code_review instructions and review this code: ...")
```
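Loading an instruction set by id amounts to a lookup over the JSON file. A minimal sketch of that lookup (the framework's internal loader is assumed to work roughly like this):

```python
import json

# Inline stand-in for the contents of instructions.json.
INSTRUCTIONS_JSON = """
{
    "instructions": [
        {"id": "code_review",
         "description": "Code review guidelines",
         "instruction": "Review code for: correctness, efficiency, security, readability"}
    ]
}
"""

def load_instruction(raw: str, instruction_id: str) -> str:
    """Return the instruction text matching the given id."""
    data = json.loads(raw)
    for entry in data["instructions"]:
        if entry["id"] == instruction_id:
            return entry["instruction"]
    raise KeyError(instruction_id)

print(load_instruction(INSTRUCTIONS_JSON, "code_review"))
```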

### Multi-Agent Group Chat

```python
from em_agent_framework.core.group_chat.manager import GroupChatManager

# Create specialized agents
researcher = Agent(name="researcher", system_instruction="Research expert", ...)
developer = Agent(name="developer", system_instruction="Python developer", ...)

# Create group chat with skill-based routing
manager = GroupChatManager(
    agents=[researcher, developer],
    strategy="skill_based",  # Routes based on agent descriptions
    max_total_turns=10
)

# Start conversation
await manager.initiate_conversation(
    query="Research and build a JSON parser"
)
```
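Skill-based routing can be thought of as matching the query against each agent's description and handing the turn to the best match. The word-overlap heuristic below is purely illustrative, not `GroupChatManager`'s actual scoring logic:

```python
def route_by_skill(query: str, agents: dict[str, str]) -> str:
    """Pick the agent whose description shares the most words with the query."""
    query_words = set(query.lower().split())

    def score(item):
        _name, description = item
        return len(query_words & set(description.lower().split()))

    best_name, _ = max(agents.items(), key=score)
    return best_name

agents = {
    "researcher": "research expert: literature search, summarization",
    "developer": "python developer: writes and debugs code",
}
print(route_by_skill("debug this python function", agents))
```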

## Configuration

### Agent Configuration

```python
AgentConfig(
    max_turns=100,              # Max conversation turns
    max_retries_per_model=3,    # Retries before fallback
    verbose=True,               # Print debug logs
    enable_parallel_tools=True, # Enable parallel execution
    max_parallel_tools=5        # Max concurrent tools
)
```

### Model Configuration

```python
ModelConfig(
    name="gemini-2.0-flash-exp",
    provider="gemini",          # "gemini" or "anthropic"
    temperature=0.1,
    max_output_tokens=8192,
    timeout=10.0               # Request timeout (seconds)
)
```

## Testing

```bash
# Run all tests
pytest tests/ -v

# Run specific test
pytest tests/test_agent_basic.py -v
```

## Development

```bash
git clone https://github.com/emergence-ai/em-agent-framework
cd em-agent-framework
pip install -e ".[dev]"
```

## License

Apache License 2.0. See the LICENSE file for details.
