Metadata-Version: 2.1
Name: agentsflowcompiler-lib
Version: 0.2.2
Summary: Modular AI agent framework — build, configure, and run LLM agents
Author: Shahar Fadlon
License: MIT License
        
        Copyright (c) 2024 Shahar Fadlon
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/shaharfadlon/AgentFlow
Project-URL: Repository, https://github.com/shaharfadlon/AgentFlow
Project-URL: Bug Tracker, https://github.com/shaharfadlon/AgentFlow/issues
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Typing :: Typed
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic<3.0,>=2.0
Requires-Dist: pyyaml<7.0,>=6.0
Requires-Dist: tiktoken<0.8,>=0.5.0
Requires-Dist: tenacity<9.0,>=8.0.0
Requires-Dist: filelock<4.0,>=3.0.0
Requires-Dist: python-dotenv<2.0.0,>=1.0.0
Provides-Extra: ai21
Requires-Dist: ai21; extra == "ai21"
Provides-Extra: all
Requires-Dist: openai<2.0,>=1.0; extra == "all"
Requires-Dist: anthropic>=0.30; extra == "all"
Requires-Dist: google-genai>=1.0; extra == "all"
Requires-Dist: boto3; extra == "all"
Requires-Dist: mistralai; extra == "all"
Requires-Dist: cohere; extra == "all"
Requires-Dist: ai21; extra == "all"
Provides-Extra: amazon
Requires-Dist: boto3; extra == "amazon"
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.30; extra == "anthropic"
Provides-Extra: azure
Requires-Dist: openai<2.0,>=1.0; extra == "azure"
Provides-Extra: cohere
Requires-Dist: cohere; extra == "cohere"
Provides-Extra: deepseek
Requires-Dist: openai<2.0,>=1.0; extra == "deepseek"
Provides-Extra: dev
Requires-Dist: pytest<9.0,>=8.0; extra == "dev"
Requires-Dist: pytest-mock<4.0,>=3.0; extra == "dev"
Requires-Dist: pytest-cov<6.0,>=5.0; extra == "dev"
Requires-Dist: bandit<2.0,>=1.7; extra == "dev"
Requires-Dist: pip-audit<3.0,>=2.0; extra == "dev"
Provides-Extra: google
Requires-Dist: google-genai>=1.0; extra == "google"
Provides-Extra: meta
Requires-Dist: openai<2.0,>=1.0; extra == "meta"
Provides-Extra: mistral
Requires-Dist: mistralai; extra == "mistral"
Provides-Extra: ollama
Requires-Dist: openai<2.0,>=1.0; extra == "ollama"
Provides-Extra: openai
Requires-Dist: openai<2.0,>=1.0; extra == "openai"
Provides-Extra: perplexity
Requires-Dist: openai<2.0,>=1.0; extra == "perplexity"
Provides-Extra: xai
Requires-Dist: openai<2.0,>=1.0; extra == "xai"

# AgentsFlowCompiler

A modular Python framework for building, configuring, and running LLM-powered AI agents. Define agents in YAML, equip them with tools, and run them with a single function call.

---

## Installation

```bash
# Core (production runtime only)
pip install AgentsFlowCompiler-lib

# With dev tools (agent management, monitoring, CRUD API)
pip install "AgentsFlowCompiler-lib[dev]"

# With specific LLM providers
pip install "AgentsFlowCompiler-lib[openai]"
pip install "AgentsFlowCompiler-lib[anthropic]"
pip install "AgentsFlowCompiler-lib[google]"
pip install "AgentsFlowCompiler-lib[ollama]"

# Everything
pip install "AgentsFlowCompiler-lib[all]"
```

---

## Quick Start

### Production — Load & Run

```python
from agentsflow import load_agents, AgentsFlowConfig

# Load all agents from a directory
agents = load_agents("/path/to/my_project")

# Run an agent
agent = agents["analyzer"]
result = agent.run("What is the GDP of France?")
print(result.output)  # The answer
print(result.token_input, result.token_output)  # Token metadata

# Agent metadata properties
print(agent.name)      # "analyzer"
print(agent.model)     # "gpt-4o"
print(agent.provider)  # "openai"

# With a .env file for API keys
agents = load_agents("/path/to/my_project", env_path="/path/to/.env")

# With SDK config (log level, network silencing, structured logging)
config = AgentsFlowConfig(log_level="INFO", silence_network_loggers=True, log_format="json")
agents = load_agents("/path/to/my_project", config=config)
```

### Development — Create & Manage

```python
from agentsflow.dev import create_project, create_agent, add_tool
from agentsflow import AgentModelConfig, Prompt, ToolIdentityConfig, ToolConfig

# Create a project
create_project("My AI Project", dev_path="/path/to/dev", prod_path="/path/to/prod")

# Create an agent
create_agent(
    base_dir="/path/to/dev",
    agent_name="researcher",
    model_config=AgentModelConfig(model="gpt-4o", temperature=0.3),
    description="Research assistant that finds and summarizes information",
    prompts=Prompt(
        instruction="You are a research assistant. Be thorough and cite sources.",
        think="Break complex questions into sub-questions before answering.",
    ),
)

# Add a tool (script stored at tools/custom_tools/web_search/tool.py)
add_tool(
    base_dir="/path/to/dev",
    script="def search(query: str, max_results: int = 5):\n    return []",
    identity=ToolIdentityConfig(
        name="web_search",
        description="Search the web for current information",
        category="search",
    ),
    config=ToolConfig(
        function_name="search",
        parameters={
            "query": {"type": "string", "description": "Search query", "required": True},
            "max_results": {"type": "number", "description": "Max results", "required": False, "default": 5},
        },
        returns={"type": "array", "description": "Search results"},
    ),
    agent_name="researcher",
)
```

---

## Core Concepts

### What is an Agent?

An agent is an LLM-powered unit that:

1. **Receives** a user prompt
2. **Optionally preprocesses** the input (custom Python function)
3. **Sends** a system prompt + user message to an LLM
4. **Can use tools** — the LLM decides when to call them, executes them, and feeds results back
5. **Optionally postprocesses** the output (custom Python function)
6. **Returns** a `RunResult` object containing the output and metadata

```
User Input → [Preprocess] → LLM ⇄ Tools → [Postprocess] → Final Output
```
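The pipeline above can be sketched as plain function composition. This is a minimal illustration with stand-in callables, not the framework's actual implementation; `run_pipeline` and the stub `llm` are hypothetical names:

```python
def run_pipeline(user_input, llm, preprocess=None, postprocess=None):
    """Apply optional pre/post hooks around a single LLM call."""
    if preprocess:
        user_input = preprocess(user_input)
    output = llm(user_input)  # stand-in for the LLM + tool loop
    if postprocess:
        output = postprocess(output)
    return output

# Usage with stub callables standing in for a real model:
answer = run_pipeline(
    "  what is 2+2?  ",
    llm=lambda text: f"echo:{text}",
    preprocess=str.strip,
    postprocess=str.upper,
)
print(answer)  # ECHO:WHAT IS 2+2?
```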

### Project Structure

A typical project looks like this:

```
my_project/
├── MyProject.afproj                # Optional project metadata & env config
├── config/
│   └── agents.yaml                 # Agent manifest (lists all agents)
├── agents/
│   ├── researcher/
│   │   ├── config.yaml             # Agent configuration
│   │   ├── instruction.md          # System instruction prompt
│   │   ├── think.md                # Thinking guidelines (optional)
│   │   ├── return.md               # Output format instructions (optional)
│   │   ├── example.md              # Few-shot examples (optional)
│   │   ├── custom_tools.py         # Custom tool functions (optional)
│   │   └── logs/
│   │       ├── run_history/        # Run logs
│   │       ├── audit_logs/         # Audit trail
│   │       ├── prompt_history/     # Prompt history
│   │       └── token_usage/        # Token usage logs
│   └── writer/
│       ├── config.yaml
│       ├── instruction.md
│       └── logs/
└── tools/                          # Shared built-in tools
    ├── calculator/
    │   ├── tool.yaml
    │   └── tool.py
    └── web_search/
        ├── tool.yaml
        └── tool.py
```

### Agent Configuration (YAML)

Each agent is defined by a `config.yaml` file:

```yaml
researcher:
  # Identity
  agent_id: "node_001"
  description: "Research assistant"

  # Model
  model: gpt-4o                    # or claude-sonnet-4-20250514, gemini-pro, llama3, etc.
  provider: openai                 # auto-detected if not set
  temperature: 0.3
  max_tokens: 4096

  # Prompts (relative paths to .md files)
  instruction_path: instruction.md
  think_path: think.md             # optional: thinking guidelines
  return_path: return.md           # optional: output format rules
  example_path: example.md         # optional: few-shot examples

  # Pre/Post Processing (optional)
  preprocess_path: preprocess.py
  preprocess_function_name: preprocess
  postprocess_path: postprocess.py
  postprocess_function_name: postprocess

  # Output Format
  return_format: text              # text | json | json_object | markdown
  json_schema_path: schema.json    # optional: for structured JSON output

  # Tools
  tools:
    - name: calculator
      custom: false
    - name: company_lookup
      custom: true
      description: "Look up company info"
      path: custom_tools.py
      function_name: lookup
      parameters:
        query:
          type: string
          description: "Company name or ticker"
          required: true
```

---

## System Prompt Assembly

The agent's system prompt is assembled from multiple files in this order:

```
┌─────────────────┐
│  instruction.md  │  ← Main system instruction (required)
├─────────────────┤
│  think.md        │  ← How the agent should reason (optional)
├─────────────────┤
│  return.md       │  ← Output format guidelines (optional)
├─────────────────┤
│  example.md      │  ← Few-shot examples (optional)
└─────────────────┘
        ↓
  Combined System Prompt → sent to LLM
```

This modular approach lets you reuse and swap prompt sections independently.
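A rough sketch of this assembly, assuming each section is simply concatenated with blank lines in between (the framework's real joining logic may differ; `assemble_system_prompt` is a hypothetical name):

```python
from pathlib import Path

def assemble_system_prompt(agent_dir: str) -> str:
    """Concatenate whichever optional prompt sections exist, in order."""
    sections = ["instruction.md", "think.md", "return.md", "example.md"]
    parts = []
    for name in sections:
        path = Path(agent_dir) / name
        if path.exists():
            parts.append(path.read_text().strip())
    return "\n\n".join(parts)
```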

---

## Tools

Tools give agents the ability to perform actions — search the web, calculate math, call APIs, read files, and anything else you can write in Python.

### Built-in Tools

Built-in tools are shared across all agents. Each is a folder inside `tools/`:

| Tool | Category | Description |
|------|----------|-------------|
| `calculator` | math | Evaluate math expressions safely (sqrt, log, sin, +, -, etc.) |
| `web_search` | search | Search the web for current information |

#### Using a built-in tool:

```yaml
tools:
  - name: calculator
    custom: false
```

### Custom Tools

Custom tools are Python functions specific to an agent.

**Step 1:** Write the function:

```python
# agents/researcher/custom_tools.py
def lookup_company(query: str) -> dict:
    """Look up company information."""
    # your logic here
    return {"name": "Apple", "sector": "Technology", "market_cap": "3.4T"}
```

**Step 2:** Define in YAML:

```yaml
tools:
  - name: company_lookup
    custom: true
    description: "Look up company information by name or ticker"
    category: finance
    path: custom_tools.py
    function_name: lookup_company
    parameters:
      query:
        type: string
        description: "Company name or stock ticker"
        required: true
    returns:
      type: object
      description: "Company info with name, sector, market_cap"
```

### Tool Parameter Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | string | ✅ | `string`, `number`, `boolean`, `array`, `object` |
| `description` | string | ✅ | What this parameter does (the LLM reads this) |
| `required` | boolean | ❌ | Default: `false` |
| `default` | any | ❌ | Default value if not provided |
| `enum` | array | ❌ | List of allowed values |

### How Tool Calling Works at Runtime

```
1. Agent loads         → ToolRegistry reads YAML, imports Python functions
2. Agent.run() called  → LLM receives tool schemas in API request
3. LLM wants a tool    → Returns tool_calls: [{name, arguments}]
4. Agent executes      → ToolRegistry.execute(name, args) → runs Python function
5. Result sent back    → Added to messages as role: "tool"
6. LLM sees result     → Calls another tool or returns final answer
7. Loop limit          → Max 10 rounds (prevents infinite loops)
```

### Docker execution (optional)

Tools can run **inside a container** when `docker_enabled` is set on the tool and `docker_config.yaml` exists beside the tool. These are **public** DEV APIs (also exported from `agentsflow.dev`):

| Function | Role |
|----------|------|
| `get_tool_docker_config` | Read and parse `docker_config.yaml` → `DockerToolConfig` |
| `set_tool_docker_config` | Write `docker_config.yaml`, set `docker_enabled=True` |
| `remove_tool_docker_config` | Delete the YAML, set `docker_enabled=False` |

```python
from pathlib import Path
from agentsflow.dev import (
    get_tool_docker_config,
    set_tool_docker_config,
    remove_tool_docker_config,
)
from agentsflow.schema.docker_tool_config_schema import DockerToolConfig

base = Path("/path/to/DEV")

# Read current Docker settings (requires docker_enabled=True on the tool)
cfg = get_tool_docker_config(base, agent_name="my_agent", tool_name="my_tool")

# Create or overwrite docker_config.yaml
set_tool_docker_config(
    base,
    docker_config=DockerToolConfig(image="python:3.11-slim"),
    agent_name="my_agent",
    tool_name="my_tool",
)

# Turn off Docker execution for this tool
remove_tool_docker_config(base, agent_name="my_agent", tool_name="my_tool")
```

| Mode | `mount_host_tool_script` | Behavior |
|------|--------------------------|----------|
| **Development (default)** | `true` | Bind-mounts the agent’s `tool.py` from disk into the container at `container_tool_path`. |
| **Production** | `false` | No host mount; the **image** must already include the script at `container_tool_path` (e.g. `COPY` in a Dockerfile). Arguments are still passed via `TOOL_ARGS`. |

**Full guide (Hebrew):** [GUIDE_DOCKER_TOOLS.md](../README/GUIDE_DOCKER_TOOLS.md) — paths, APIs, `DockerToolConfig` fields, runtime, logging.

Other useful fields in `DockerToolConfig`: `image`, `container_tool_path` (default `/workspace/tool.py`), `tool_interpreter` (default `python`). Full detail: [helper/API_REFERENCE.md](helper/API_REFERENCE.md); extended Docker section (examples + tables): [README/API_REFERENCE.md](../README/API_REFERENCE.md#dev-api--docker-tool-config) (repo root).

---

## Pre/Post Processing

### Preprocess

A Python function that transforms user input **before** it reaches the LLM:

```python
# preprocess.py
def preprocess(user_input: str) -> str:
    """Add context, clean input, augment with RAG results, etc."""
    context = fetch_relevant_docs(user_input)  # your own retrieval helper
    return f"Context:\n{context}\n\nQuestion: {user_input}"
```

### Postprocess

A Python function that transforms LLM output **after** the response:

```python
# postprocess.py
import json

def postprocess(llm_output):
    """Parse, validate, save to DB, trigger notifications, etc."""
    data = json.loads(llm_output)
    save_to_database(data)  # your own persistence helper
    return data
```

---

## LLM Providers

The framework auto-detects the provider based on model name. You can also set it explicitly via `provider` in the config.

| Provider | Models | API Key Env Var |
|----------|--------|-----------------|
| **OpenAI** | gpt-4, gpt-4o, gpt-4o-mini, o1, o3 | `OPENAI_API_KEY` |
| **Anthropic** | claude-sonnet-4-20250514, claude-3-haiku, claude-3-opus | `ANTHROPIC_API_KEY` |
| **Google** | gemini-pro, gemini-1.5-flash, gemini-2.0 | `GOOGLE_API_KEY` |
| **Ollama** | llama3, mistral, phi, qwen, deepseek, codellama | No key needed (local) |
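Auto-detection presumably keys off the model-name prefix. A sketch matching the table above (`detect_provider` is a hypothetical name; the framework's real heuristics may be more elaborate):

```python
def detect_provider(model: str) -> str:
    """Guess the provider from the model name, per the table above."""
    prefixes = {
        ("gpt-", "o1", "o3"): "openai",
        ("claude",): "anthropic",
        ("gemini",): "google",
        ("llama", "mistral", "phi", "qwen", "deepseek", "codellama"): "ollama",
    }
    for keys, provider in prefixes.items():
        if any(model.startswith(k) for k in keys):
            return provider
    raise ValueError(f"cannot auto-detect provider for model {model!r}")
```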

---

## API Reference

See [helper/API_REFERENCE.md](helper/API_REFERENCE.md) for the complete reference. Summary:

### PROD API

```python
from agentsflow import load_agents, AgentsFlowConfig
```

| Function | Description |
|----------|-------------|
| `load_agents(agents_dir, env_path=None, config=None)` | Load all agents → `dict[str, Agent]`. `config`: optional `AgentsFlowConfig` |
| `AgentsFlowConfig` | SDK-wide config: `log_level`, `silence_network_loggers`, `log_format` |

### DEV API

Available with `pip install AgentsFlowCompiler-lib[dev]`. Project metadata lives in an optional `.afproj` file; agent/tool/processing/monitoring APIs operate directly on the DEV `base_dir`.

Key data classes (**AgentConfig**, **RunResult**, **Prompt**, **Tool**, etc.), monitoring types (**AuditLogEntry**, **RunHistoryEntry**), and all errors can be imported directly from the top-level package (including **`DockerExecutionError`** for Docker tool failures — subclass of **`AgentsFlowToolError`**):
```python
from agentsflow import (
    AgentConfig, RunResult, Prompt, Tool,
    AuditLogEntry, RunHistoryEntry,
    AgentsFlowToolError, DockerExecutionError,
    LLMRateLimitError, MaxToolRoundsError,
)
```

#### Project

| Function | Description |
|----------|-------------|
| `create_project(project_name, dev_path, prod_path, ...)` | Create optional `.afproj` metadata file |
| `edit_project(project_config_path, **updates)` | Edit project fields |
| `get_project(project_config_path)` | Read project config |

#### Agent

| Function | Description |
|----------|-------------|
| `create_agent(base_dir, agent_name, model_config, ...)` | Create agent directory + config + prompts |
| `edit_agent(base_dir, agent_id/name, ...)` | Edit config fields (pass only what changes) |
| `delete_agent(base_dir, agent_id/name)` | Delete agent + remove from manifest |
| `duplicate_agent(base_dir, new_name, source_name/id)` | Deep copy with new name |
| `validate_agent(base_dir, agent_id/name)` | Raise `AgentsFlowConfigError` on invalid config |
| `get_agent_config(base_dir, agent_id/name)` | Get full `AgentConfig` |
| `get_all_agents(base_dir)` | List all registered agents |
| `get_agent_prompts(base_dir, agent_id/name)` | Retrieve full `Prompt` object from disk |

#### Tools

| Function | Description |
|----------|-------------|
| `add_tool(base_dir, script, identity, config, agent_name/id)` | Add custom tool (`ToolIdentityConfig` + `ToolConfig`) |
| `edit_tool(base_dir, tool_name/id, agent_name/id, ...)` | Edit tool fields |
| `remove_tool(base_dir, tool_name/id, agent_name/id)` | Remove tool from agent |
| `get_custom_tools(base_dir, agent_name/id)` | List custom tools on agent |
| `get_agent_builtin_tools(base_dir, agent_name/id)` | List built-in tools used by agent |
| `get_all_builtin_tools(tools_dir)` | List all available built-in tools |
| `get_full_script_tool(base_dir, agent_name/id, tool_name/id)` | Read tool Python script from disk |
| `get_tool_docker_config(...)` | Read `DockerToolConfig` from `docker_config.yaml` (`docker_enabled` required) |
| `set_tool_docker_config(..., docker_config)` | Write `docker_config.yaml`, set `docker_enabled=True` |
| `remove_tool_docker_config(...)` | Delete Docker YAML, set `docker_enabled=False` |

See **Docker execution (optional)** under [Tools](#tools) for `mount_host_tool_script` (dev vs production).

#### Processing

| Function | Description |
|----------|-------------|
| `add_preprocess(base_dir, agent_name/id)` | Add preprocess (creates default script) |
| `edit_preprocess(base_dir, agent_name/id, script)` | Replace preprocess script |
| `remove_preprocess(base_dir, agent_name/id)` | Remove preprocess |
| `add_postprocess(base_dir, agent_name/id)` | Add postprocess (creates default script) |
| `edit_postprocess(base_dir, agent_name/id, script)` | Replace postprocess script |
| `remove_postprocess(base_dir, agent_name/id)` | Remove postprocess |
| `get_preprocess_script(base_dir, agent_name/id)` | Read preprocess script from disk |
| `get_postprocess_script(base_dir, agent_name/id)` | Read postprocess script from disk |

#### Monitoring

| Function | Description |
|----------|-------------|
| `get_prompt_history(base_dir, agent_name)` | History of system prompt changes |
| `get_prompt_from_hash(base_dir, hash, agent_name/id)` | Read stored prompt by hash |
| `get_run_history(base_dir, agent_name, from_date, to_date)` | I/O logs with date filtering |
| `get_run_details(base_dir, rid, agent_name/id)` | Single run by run ID |
| `get_token_usage(base_dir, agent_name, from_date, to_date)` | Token stats aggregate |
| `get_audit_logs(base_dir, agent_name/id)` | Audit log entries |
| `get_audit_log_from_timestamp(base_dir, timestamp, agent_name/id)` | Single audit entry by timestamp |

---

## Architecture

```
AgentsFlowCompiler-lib
├── agentsflow/             Python package (import name)
│   ├── __init__.py         PROD entry: load_agents()
│   ├── _prod.py            Production loader + .env support
│   │
│   ├── agent/              Core agent runtime
│   │   ├── agent.py            Agent class (run loop)
│   │   ├── config.py           Path resolution
│   │   ├── prompts.py          Prompt assembly & pre/post process
│   │   ├── tools.py            Tool execution wrapper
│   │   ├── stats.py            Logging & token tracking
│   │   └── _utils.py           Shared helpers
│   │
│   ├── llm/                LLM provider abstraction
│   │   ├── base.py             LLMClient abstract interface
│   │   ├── openai_client.py    OpenAI implementation
│   │   ├── anthropic_client.py Anthropic implementation
│   │   ├── google_client.py    Google Gemini implementation
│   │   ├── ollama_client.py    Ollama (local) implementation
│   │   └── factory.py          Auto-detect & create client
│   │
│   ├── schema/             Pydantic data models
│   │   ├── agent_config_schema.py   AgentConfig
│   │   ├── tool_config_schema.py    ToolConfig
│   │   └── tool_schema.py           ToolParameterConfig
│   │
│   ├── tools/              Tool registry
│   │   └── registry.py         Load, register, execute tools
│   │
│   ├── builder/            Agent construction
│   │   └── agents_builder.py   YAML manifest → Agent instances
│   │
│   └── dev/                DEV API (25 functions)
│       ├── project_api.py
│       ├── agent_api.py
│       ├── tool_api.py
│       ├── processing_api.py
│       └── monitoring_api.py
│
└── tests/                  101 tests
    ├── micro/                  Unit tests (99)
    │   ├── agent/
    │   └── api/
    └── macro/                  Integration tests (2)
```

### Design Principles (SOLID)

| Principle | How It's Applied |
|-----------|-----------------|
| **Single Responsibility** | Each file does one thing (one LLM provider per file, one schema per file) |
| **Open/Closed** | Add a new LLM provider = new file + 2 lines in `factory.py`, no existing code changes |
| **Liskov Substitution** | All providers implement `LLMClient` ABC and are fully interchangeable |
| **Interface Segregation** | `LLMClient` has a minimal interface: `chat()` + `provider_name` |
| **Dependency Inversion** | `Agent` depends on the `LLMClient` abstraction, never on concrete providers |
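For illustration, a minimal interface in this spirit might look like the following (names taken from the table above; the actual signatures in `agentsflow.llm.base` may differ):

```python
from abc import ABC, abstractmethod

class LLMClient(ABC):
    """Minimal provider interface: chat() + provider_name."""

    @property
    @abstractmethod
    def provider_name(self) -> str: ...

    @abstractmethod
    def chat(self, messages: list[dict]) -> str: ...

class EchoClient(LLMClient):
    """Toy provider showing how a concrete client slots in."""
    provider_name = "echo"

    def chat(self, messages):
        # Echo the last user message instead of calling a real API
        return messages[-1]["content"]
```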

---

## Testing

```bash
# All tests (101)
pytest tests/

# Unit tests only
pytest tests/micro/

# Integration tests only
pytest tests/macro/

# Specific module
pytest tests/micro/agent/
pytest tests/micro/api/
```

---

## Full Example

```python
from agentsflow.dev import (
    create_project, create_agent, add_tool,
    add_preprocess, edit_preprocess, validate_agent, get_all_agents,
)
# 1. Set up project
create_project("Financial Analysis", dev_path="/home/user/fin_project", dev_env_strategy="project_local")

# 2. Create agent
from agentsflow import (
    AgentModelConfig, Prompt, 
    ToolIdentityConfig, ToolConfig
)

create_agent(
    base_dir="/home/user/fin_project",
    agent_name="analyst",
    model_config=AgentModelConfig(model="gpt-4o", temperature=0.2),
    description="Financial research and analysis agent",
    prompts=Prompt(
        instruction="""You are a financial analyst.
    Use available tools to research companies and calculate metrics.
    Always show your reasoning.""",
        think="Break analysis into: data gathering → calculation → conclusion",
    ),
)

# 3. Add tool (script stored at tools/custom_tools/<name>/tool.py)
add_tool(
    base_dir="/home/user/fin_project",
    script="def get_stock_data(ticker: str):\n    return {}",
    identity=ToolIdentityConfig(name="stock_lookup", description="Get stock price and metrics", category="finance"),
    config=ToolConfig(
        function_name="get_stock_data",
        parameters={"ticker": {"type": "string", "description": "Stock ticker (AAPL, MSFT)", "required": True}},
        returns={"type": "object", "description": "Stock data"},
    ),
    agent_name="analyst",
)

# 4. Add preprocess (creates default script), then edit its content
add_preprocess("/home/user/fin_project", agent_name="analyst")
edit_preprocess("/home/user/fin_project", agent_name="analyst", script="def preprocess(prompt: str):\n    return prompt.strip()")

# 5. Validate
validate_agent("/home/user/fin_project", agent_name="analyst")

# 6. List all agents
agents = get_all_agents("/home/user/fin_project")
# [AgentConfig(...)]
```

Then in production:

```python
from agentsflow import load_agents

agents = load_agents("/home/user/fin_project", env_path="/home/user/.env")
result = agents["analyst"].run("Analyze Apple's Q4 earnings and compare to Microsoft")
print(result.output)
```
