Metadata-Version: 2.4
Name: daie
Version: 1.0.1
Summary: A Python library for creating and deploying decentralized AI agents with tools
Author-email: Kanishk Kumar Singh <kanishkkumar2004@gmail.com>
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: nats-py>=2.6.0
Requires-Dist: cryptography>=46.0.0
Requires-Dist: python-dotenv>=1.2.0
Requires-Dist: pydantic>=2.12.0
Requires-Dist: pydantic-settings>=2.12.0
Requires-Dist: requests>=2.31.0
Requires-Dist: rich>=13.0.0
Requires-Dist: typer>=0.12.0
Provides-Extra: dev
Requires-Dist: pytest>=9.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=1.3.0; extra == "dev"
Requires-Dist: pytest-cov>=7.0.0; extra == "dev"
Requires-Dist: black>=24.0.0; extra == "dev"
Requires-Dist: flake8>=7.0.0; extra == "dev"
Requires-Dist: mypy>=1.8.0; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx>=7.0.0; extra == "docs"
Requires-Dist: sphinx-rtd-theme>=2.0.0; extra == "docs"
Requires-Dist: nbsphinx>=0.9.0; extra == "docs"
Dynamic: license-file

# DAIE - Decentralized AI Ecosystem

A lightweight Python library for creating and managing AI agents with tools, featuring decentralized communication and memory management.

## Features

### 🚀 **Core Features**
- **Lightweight Design**: Minimal dependencies, optimized for speed and resource efficiency
- **Agent Management**: Create, configure, and manage AI agents with unique identities
- **Tool System**: Define and register reusable tools for agents to execute
- **Decentralized Communication**: Agents communicate via NATS JetStream
- **Memory Management**: Agent-specific memories with persistence support
- **LLM Integration**: Centralized LLM management with Ollama integration (default: llama3)
- **CLI Interface**: Command-line tools for system management

### 🤖 **Agent Features**
Each agent has:
- **Unique Identity**: ID, name, role, goal, backstory, and system prompt
- **Local Tool Execution**: Agents execute tools locally within their own context
- **Chat History**: Individual memory stores with working, semantic, and episodic memory
- **Vector Database**: Each agent has its own vector database for semantic search (in development)
- **LangGraph Workflow**: Each agent has its own LangGraph workflow (in development)
- **LLM from Core**: Agents fetch LLM instances from the centralized LLM manager
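
The memory tiers listed above (working, semantic, and episodic) can be pictured with a small standard-library sketch. `AgentMemory` here is purely illustrative and is not DAIE's actual class; the real memory manager and its vector-database backing are internal and still in development.

```python
from collections import deque


class AgentMemory:
    """Toy three-tier memory store mirroring the layers described above.

    Illustrative only: DAIE's real memory manager may differ.
    """

    def __init__(self, working_size: int = 10):
        self.working = deque(maxlen=working_size)  # recent turns, bounded
        self.semantic = {}                         # distilled facts by key
        self.episodic = []                         # append-only event log

    def remember_turn(self, role: str, text: str) -> None:
        """Record a chat turn in working and episodic memory."""
        self.working.append((role, text))
        self.episodic.append({"role": role, "text": text})

    def store_fact(self, key: str, fact: str) -> None:
        """Promote a distilled fact into semantic memory."""
        self.semantic[key] = fact


memory = AgentMemory(working_size=2)
memory.remember_turn("user", "What is NATS?")
memory.remember_turn("agent", "A lightweight messaging system.")
memory.remember_turn("user", "Thanks!")
memory.store_fact("nats", "NATS is a lightweight messaging system.")
```

The key design point: working memory is bounded (the `deque` evicts the oldest turn once full), while the episodic log keeps every event for later retrieval.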

## Installation

### Prerequisites
- Python 3.10+
- Ollama (for LLM functionality)
- NATS JetStream (for communication)

### Install the Library
```bash
pip install daie
```

### Install Ollama
1. Download and install Ollama from [ollama.com](https://ollama.com/download)
2. Pull the default model:
   ```bash
   ollama pull llama3
   ```

## Quick Start

### Example: Creating a Simple Agent
```python
#!/usr/bin/env python3
import asyncio
import logging
from daie import Agent, AgentConfig
from daie.agents import AgentRole
from daie.tools import tool

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


async def main():
    logger.info("=== DAIE - Decentralized AI Ecosystem Example ===")
    
    # Create a tool
    @tool(
        name="greeting",
        description="Generate a greeting message",
        category="general",
        version="1.0.0"
    )
    async def greeting_tool(name: str, language: str = "en") -> str:
        greetings = {
            "en": f"Hello, {name}! Welcome to DAIE!",
            "es": f"Hola, {name}! ¡Bienvenido a DAIE!",
            "fr": f"Bonjour, {name}! Bienvenue dans DAIE!",
            "de": f"Hallo, {name}! Willkommen bei DAIE!"
        }
        return greetings.get(language.lower(), greetings["en"])
    
    # Create agent configuration with new features
    config = AgentConfig(
        name="ResearchAgent",
        role=AgentRole.SPECIALIZED,
        goal="Research information on given topics",
        backstory="Created to assist with research and information gathering",
        system_prompt="You are a research assistant that helps users find and analyze information.",
        capabilities=["greeting"]
    )
    
    # Create agent
    agent = Agent(config=config)
    agent.add_tool(greeting_tool)
    
    # Test tool execution
    result = await greeting_tool.execute({"name": "Alice", "language": "es"})
    logger.info(f"✅ Tool executed successfully: {result}")
    
    logger.info("\n🎉 Example completed successfully!")


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except Exception as e:
        logger.error(f"❌ Error: {e}")
        import sys
        sys.exit(1)
```

## CLI Usage

### Agent Management
```bash
# List all agents
daie agent list

# Create a new agent
daie agent create --name "MyAgent" --role "general-purpose" --goal "Help users with questions"

# Start an agent
daie agent start <agent-id>

# Stop an agent
daie agent stop <agent-id>

# Get agent status
daie agent status <agent-id>

# Delete an agent
daie agent delete <agent-id>
```

### Core System Management
```bash
# Initialize the system
daie core init

# Start the central core system
daie core start

# Stop the central core system
daie core stop

# Restart the central core system
daie core restart

# Get system status
daie core status

# View system logs
daie core logs

# Check system health
daie core health
```

## LLM Configuration

### Setting LLM Parameters
```python
from daie import set_llm, get_llm_config, LLMType

# Using Ollama (default)
set_llm(ollama_llm="llama3")
set_llm(ollama_llm="mistral", temperature=0.3, max_tokens=1500)

# Using OpenAI
set_llm(
    llm_type=LLMType.OPENAI,
    model_name="gpt-3.5-turbo",
    api_key="your-api-key",
    temperature=0.5,
    max_tokens=2000
)

# Get current configuration
config = get_llm_config()
print(f"Current LLM: {config.llm_type.value}/{config.model_name}")
print(f"Temperature: {config.temperature}")
print(f"Max tokens: {config.max_tokens}")
```

### Available LLM Models

#### Ollama Models:
- llama3 (default)
- llama3.2:latest
- mistral
- llama2
- gemma

#### OpenAI Models:
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
- gpt-3.5-turbo

## Configuration

### Environment Variables
```bash
# System configuration
DAIE_LOG_LEVEL=INFO
DAIE_NATS_URL=nats://localhost:4222
DAIE_CENTRAL_CORE_URL=http://localhost:8000

# LLM configuration
DAIE_DEFAULT_LLM_MODEL=llama3
DAIE_LLM_TEMPERATURE=0.7
DAIE_LLM_MAX_TOKENS=1000

# Database configuration
DAIE_DATABASE_URL=sqlite:///:memory:
DAIE_REDIS_URL=redis://localhost:6379/0
```
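
DAIE loads configuration through pydantic-settings; purely to illustrate the precedence (environment variable wins over the documented default), here is a standard-library-only sketch. `load_daie_settings` is a hypothetical helper, not part of the DAIE API.

```python
import os


def load_daie_settings() -> dict:
    """Read DAIE_* environment variables, falling back to the documented
    defaults. Illustrative sketch, not DAIE's real settings loader."""
    defaults = {
        "DAIE_LOG_LEVEL": "INFO",
        "DAIE_NATS_URL": "nats://localhost:4222",
        "DAIE_DEFAULT_LLM_MODEL": "llama3",
        "DAIE_LLM_TEMPERATURE": "0.7",
        "DAIE_LLM_MAX_TOKENS": "1000",
    }
    settings = {key: os.environ.get(key, value) for key, value in defaults.items()}
    # Environment variables arrive as strings; coerce the numeric ones.
    settings["DAIE_LLM_TEMPERATURE"] = float(settings["DAIE_LLM_TEMPERATURE"])
    settings["DAIE_LLM_MAX_TOKENS"] = int(settings["DAIE_LLM_MAX_TOKENS"])
    return settings
```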

## Architecture

### System Components
1. **Agent**: Individual AI entity with specific capabilities
2. **Tool**: Reusable functionality that agents can execute
3. **LLM Manager**: Handles LLM integration with various providers
4. **Communication Manager**: Facilitates agent communication via NATS
5. **Memory Manager**: Manages agent memory storage and retrieval
6. **Tool Registry**: Central repository for available tools
7. **Central Core System**: Orchestrator for the entire ecosystem

### Communication Protocol
Agents communicate using NATS JetStream with the following message types:
- **Text Messages**: Direct communication between agents
- **Tasks**: Requests for tool execution
- **Responses**: Results from task execution
- **Events**: System and agent events
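
DAIE's wire format is internal to the library; purely as an illustration, the four message types above could share a JSON envelope like the hypothetical `Message` below.

```python
import json
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class Message:
    """Hypothetical envelope for the message types listed above;
    DAIE's actual wire format may differ."""
    type: str        # "text" | "task" | "response" | "event"
    sender: str      # agent ID of the publisher
    recipient: str   # agent ID (or subject) of the consumer
    payload: dict
    id: str = field(default_factory=lambda: uuid.uuid4().hex)


def encode(msg: Message) -> bytes:
    """Serialize a message for publishing on a JetStream subject."""
    return json.dumps(asdict(msg)).encode("utf-8")


def decode(raw: bytes) -> Message:
    """Rebuild a Message from received bytes."""
    return Message(**json.loads(raw.decode("utf-8")))


# A task asking another agent to run its "greeting" tool:
task = Message(type="task", sender="agent-a", recipient="agent-b",
               payload={"tool": "greeting", "args": {"name": "Alice"}})
round_tripped = decode(encode(task))
```

With nats-py, bytes like these would be handed to `js.publish(subject, encode(task))` on a JetStream context obtained via `nc.jetstream()`.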

## Development

### Prerequisites
- Python 3.10+
- Docker (for running dependencies)
- Poetry (for package management)

### Setup
```bash
# Clone the repository
git clone https://github.com/decentralized-ai/decentralized-ai-ecosystem.git
cd decentralized-ai-ecosystem

# Install dependencies
poetry install

# Run tests
poetry run pytest tests/

# Run the CLI
poetry run daie --help
```

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Support

For questions or support, please contact **Kanishk Kumar Singh** at kanishkkumar2004@gmail.com.
