Metadata-Version: 2.4
Name: fluxibly
Version: 1.0.0
Summary: Modular Agentic Framework — LLM + Agent + Tools with MCP support
Project-URL: Homepage, https://github.com/Lavaflux/fluxibly
Project-URL: Documentation, https://github.com/Lavaflux/fluxibly#readme
Project-URL: Repository, https://github.com/Lavaflux/fluxibly
Project-URL: Issues, https://github.com/Lavaflux/fluxibly/issues
Author-email: Lavaflux <contact@lavaflux.com>
License: MIT
Keywords: agent,ai,anthropic,automation,gemini,llm,mcp,openai
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Requires-Dist: anyio>=4.0.0
Requires-Dist: loguru>=0.7.3
Requires-Dist: mcp>=1.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: all
Requires-Dist: anthropic>=0.30.0; extra == 'all'
Requires-Dist: asyncpg>=0.29.0; extra == 'all'
Requires-Dist: fastapi>=0.110.0; extra == 'all'
Requires-Dist: google-generativeai>=0.7.0; extra == 'all'
Requires-Dist: jinja2>=3.1.0; extra == 'all'
Requires-Dist: langchain-anthropic>=1.2.0; extra == 'all'
Requires-Dist: langchain-core>=0.3.0; extra == 'all'
Requires-Dist: langchain-google-genai>=4.0.0; extra == 'all'
Requires-Dist: langchain-openai>=0.2.0; extra == 'all'
Requires-Dist: langchain>=0.3.0; extra == 'all'
Requires-Dist: litellm>=1.80.9; extra == 'all'
Requires-Dist: openai>=1.30.0; extra == 'all'
Requires-Dist: python-multipart>=0.0.6; extra == 'all'
Requires-Dist: uvicorn[standard]>=0.27.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.30.0; extra == 'anthropic'
Provides-Extra: dev
Requires-Dist: build>=1.4.0; extra == 'dev'
Requires-Dist: pyright>=1.1.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest-mock>=3.12.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.8.0; extra == 'dev'
Requires-Dist: twine>=6.2.0; extra == 'dev'
Provides-Extra: gemini
Requires-Dist: google-generativeai>=0.7.0; extra == 'gemini'
Provides-Extra: langchain
Requires-Dist: langchain-anthropic>=1.2.0; extra == 'langchain'
Requires-Dist: langchain-core>=0.3.0; extra == 'langchain'
Requires-Dist: langchain-google-genai>=4.0.0; extra == 'langchain'
Requires-Dist: langchain-openai>=0.2.0; extra == 'langchain'
Requires-Dist: langchain>=0.3.0; extra == 'langchain'
Provides-Extra: litellm
Requires-Dist: litellm>=1.80.9; extra == 'litellm'
Provides-Extra: mcp-servers
Requires-Dist: opencv-python>=4.9.0; extra == 'mcp-servers'
Requires-Dist: openpyxl>=3.1.0; extra == 'mcp-servers'
Requires-Dist: pandas>=2.2.0; extra == 'mcp-servers'
Requires-Dist: pdf2image>=1.17.0; extra == 'mcp-servers'
Requires-Dist: pillow>=10.0.0; extra == 'mcp-servers'
Requires-Dist: pytesseract>=0.3.10; extra == 'mcp-servers'
Requires-Dist: qdrant-client>=1.16.2; extra == 'mcp-servers'
Requires-Dist: rich>=13.0.0; extra == 'mcp-servers'
Requires-Dist: sentence-transformers>=5.2.0; extra == 'mcp-servers'
Provides-Extra: monitoring
Requires-Dist: asyncpg>=0.29.0; extra == 'monitoring'
Requires-Dist: fastapi>=0.110.0; extra == 'monitoring'
Requires-Dist: jinja2>=3.1.0; extra == 'monitoring'
Requires-Dist: python-multipart>=0.0.6; extra == 'monitoring'
Requires-Dist: uvicorn[standard]>=0.27.0; extra == 'monitoring'
Provides-Extra: openai
Requires-Dist: openai>=1.30.0; extra == 'openai'
Description-Content-Type: text/markdown

# Fluxibly

**Modular Agentic Framework — LLM + Agent + Tools**

[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Fluxibly is a framework for building **any** agent pipeline simply. It provides general, easy-to-plug, mix-and-match components, so building a new agent means tweaking only a few core pieces.

## Architecture

```
┌─────────────────────────────────────────────────────────────────────┐
│                        USER / APPLICATION                           │
│                                                                     │
│   from fluxibly import Runner                                       │
│   result = await Runner.run("config/agent.yaml", messages, context) │
│                                                                     │
└──────────────────────────────┬──────────────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────────────┐
│                       RUNNER (Execution Loop)                       │
│  Manages: handoff chain, max_turns, agent swapping, session sandbox │
│  run() → agent.forward() → if handoff → swap agent → repeat         │
├─────────────────────────────────────────────────────────────────────┤
│                         AGENT LAYER                                 │
│                                                                     │
│  BaseAgent (abstract) → SimpleAgent, CustomAgent, ...               │
│  Manages: LLMs, tools, prompts, delegation (handoffs + agents)      │
│  forward() = prepare → LLM → tool loop → return                     │
│                                                                     │
│  Delegation:                                                        │
│   Handoffs ────── transfer_to_xxx → one-way control transfer        │
│   Agents-as-tools  agent_xxx → delegate & return                    │
│   Skills ────── load_skill → on-demand instruction loading          │
├─────────────────────────────────────────────────────────────────────┤
│                          LLM LAYER                                  │
│                                                                     │
│  BaseLLM (abstract) → OpenAILLM, AnthropicLLM, GeminiLLM            │
│                      → LangChainLLM, LiteLLM                        │
│  prepare() → format for provider                                    │
│  forward() → single inference, standardized output                  │
├─────────────────────────────────────────────────────────────────────┤
│                        TOOLS LAYER                                  │
│                                                                     │
│  Functions │ MCP │ Web Search │ File Search │ Shell │ Computer Use  │
└─────────────────────────────────────────────────────────────────────┘
```

## Installation

```bash
pip install fluxibly
```

With provider extras:

```bash
pip install fluxibly[openai]          # OpenAI only
pip install fluxibly[anthropic]       # Anthropic only
pip install fluxibly[gemini]          # Google Gemini only
pip install fluxibly[all]             # All providers
```

## Usage

### Run an Agent

The simplest way — pass a config path to `Runner.run()`. It loads the config, resolves the agent class, and runs.

```python
import asyncio
from fluxibly import Runner

async def main():
    result = await Runner.run(
        "agents/my_agent.yaml",
        [{"role": "user", "content": "Hello!"}],
    )
    print(result.response.content.output.output_text)

asyncio.run(main())
```

`Runner.run()` accepts:

| Input | What happens |
| --- | --- |
| `str` (path/name) | Loads config YAML → resolves `agent_class` → instantiates → runs |
| `AgentConfig` | Resolves `agent_class` → instantiates → runs |
| `BaseAgent` instance | Uses directly → runs |
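The three input forms boil down to a simple type dispatch. Here is an illustrative sketch, with hypothetical stub classes standing in for the real `AgentConfig` and `BaseAgent` (the actual classes and the YAML/`agent_class` resolution are richer):

```python
from dataclasses import dataclass

# Hypothetical stand-ins for fluxibly's real classes, for illustration only.
@dataclass
class AgentConfig:
    name: str
    agent_class: str = "SimpleAgent"

class BaseAgent:
    def __init__(self, config: AgentConfig):
        self.config = config

def resolve_agent(target) -> BaseAgent:
    """Mimics Runner.run()'s input handling: path -> config -> agent instance."""
    if isinstance(target, str):
        # The real framework parses the YAML file found at `target` here.
        target = AgentConfig(name=target)
    if isinstance(target, AgentConfig):
        # The real framework resolves `agent_class` to the right subclass here.
        target = BaseAgent(target)
    return target  # a BaseAgent instance passes through unchanged

agent = resolve_agent("agents/my_agent.yaml")
```

Whichever form you pass, the Runner ends up holding an agent instance before the execution loop starts.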

### Agent Config (YAML)

Every agent is a YAML file:

```yaml
# agents/my_agent.yaml
name: "my_agent"
description: "A helpful general assistant"
# agent_class: "SimpleAgent"          # Default — or "path.to.module::CustomAgent"

llm:
  model: "gpt-4o"
  temperature: 0.7

system_prompt: "You are a helpful assistant."

tools:
  - "lookup_faq"                      # Tool name → resolved from tools/ directory
  - "./tools/web_search.yaml"         # Tool path → loaded directly

handoffs:
  - "billing_specialist"              # Agent name → one-way transfer
  - agent: "refund_specialist"        # With custom options
    input_filter: "remove_tools"

agents:
  - "researcher"                      # Agent name → delegate & return

skills:
  - "csv-insights"                    # Skill name → on-demand loading

sandbox:                              # Optional: sandboxed shell execution config
  use_uv: true                        # Create a uv venv in the sandbox
  packages: ["pandas"]                # Pre-install packages in the venv
  timeout_seconds: 300                # Command timeout (default: 300)

max_tool_iterations: 10
```

### Create a New Tool

Each tool lives in its own directory: a required **YAML schema**, an optional **Python handler**, and an optional `requirements.txt`:

```text
tools/get_weather/
├── get_weather.yaml     # Required: schema (what the LLM sees)
├── get_weather.py       # Optional: handler (what runs when called)
└── requirements.txt     # Optional: Python dependencies (auto-installed)
```

```yaml
# tools/get_weather/get_weather.yaml — schema
type: "function"
function:
  name: "get_weather"
  description: "Get current weather for a city"
  parameters:
    type: "object"
    properties:
      city: { type: "string", description: "City name" }
    required: ["city"]
```

```python
# tools/get_weather/get_weather.py — handler
async def handler(city: str) -> str:
    return f"Weather for {city}: sunny, 72°F"
```
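When the LLM emits a tool call, the framework parses the JSON-encoded arguments and awaits the handler with them. A simplified sketch of that single dispatch step (the real logic lives inside the agent's tool loop; the call shape below assumes the OpenAI-style format shown in the schema):

```python
import asyncio
import json

async def handler(city: str) -> str:
    # Same toy handler as above
    return f"Weather for {city}: sunny, 72°F"

async def dispatch(tool_call: dict) -> str:
    # Tool-call arguments arrive as a JSON string and map onto handler kwargs.
    args = json.loads(tool_call["function"]["arguments"])
    return await handler(**args)

call = {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
print(asyncio.run(dispatch(call)))  # Weather for Paris: sunny, 72°F
```

The handler's keyword arguments must match the `properties` declared in the YAML schema, which is why the two files share a contract even though only the YAML is required.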

If a `requirements.txt` sits alongside the tool YAML, dependencies are auto-installed into the sandbox venv on first use. The file follows the standard [uv/pip requirements format](https://docs.astral.sh/uv/pip/packages/#installing-from-a-requirements-file):

```text
# tools/get_weather/requirements.txt
requests>=2.31
```

The handler `.py` file is auto-discovered by name (same stem as the `.yaml`). If a tool needs runtime context (e.g., a sandbox session), use the factory pattern:

```python
# tools/file_stats.py — factory handler (receives sandbox session)
async def create_handler(*, session=None, **kwargs):
    async def file_stats(file_path: str) -> str:
        r = await session.run(f"wc {file_path}")
        return r.stdout or r.stderr
    return file_stats
```

Reference tools from an agent config:

```yaml
# agents/weather_agent.yaml
name: "weather_agent"
llm:
  model: "gpt-4o-mini"
system_prompt: "You help with weather queries."
tools:
  - "get_weather"
```

The agent auto-discovers the YAML schema, the `.py` handler, and `requirements.txt` — no manual wiring needed.
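The name-based discovery can be pictured as a stem lookup inside the tool's directory. This is a hypothetical sketch, not the library's actual resolver:

```python
from pathlib import Path
import tempfile

def discover_tool(tools_dir: Path, name: str) -> dict:
    """Find a tool's files by shared stem: <name>/<name>.yaml, <name>.py, requirements.txt."""
    d = tools_dir / name
    return {
        "schema": d / f"{name}.yaml" if (d / f"{name}.yaml").exists() else None,
        "handler": d / f"{name}.py" if (d / f"{name}.py").exists() else None,
        "requirements": d / "requirements.txt" if (d / "requirements.txt").exists() else None,
    }

# Demo against a throwaway directory layout
with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp) / "get_weather"
    d.mkdir()
    (d / "get_weather.yaml").write_text("type: function\n")
    (d / "get_weather.py").write_text("async def handler(city): ...\n")
    found = discover_tool(Path(tmp), "get_weather")
    # schema and handler found; requirements.txt absent, so it stays None
```

A missing handler or `requirements.txt` is simply skipped, which is what makes both optional.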

### Create a New Agent (for delegation)

Each agent is its own YAML config. Reference it as a handoff or agent-as-tool from another agent:

```yaml
# agents/researcher.yaml
name: "researcher"
description: "Do deep web research on a topic. Returns a summary."
llm:
  model: "gpt-4o-mini"
  temperature: 0.3
system_prompt: "You are a research assistant."
tools:
  - "web_search"
```

```yaml
# agents/manager.yaml
name: "manager"
description: "Senior analyst who delegates research and writing"
llm:
  model: "gpt-4o"
system_prompt: "Break down questions. Use your research and writing agents."

# These become agent_researcher and agent_writer tools
agents:
  - "researcher"
  - "writer"
```

```python
result = await Runner.run("agents/manager.yaml", messages)
```

### Create a New Skill

Skills are instructions loaded on demand rather than at startup. Each skill is a directory with a `SKILL.md`:

```text
skills/csv-insights/
├── SKILL.md            # Required: metadata + instructions
├── requirements.txt    # Optional: Python dependencies (auto-installed)
├── scripts/            # Optional: executable code
│   └── analyze.py
└── assets/             # Optional: reference files
```

```markdown
---
name: csv-insights
description: Summarize a CSV, compute basic stats, and produce a markdown report.
---

# CSV Insights Skill

## When to use this
- User provides a CSV and wants a summary, stats, or visualization

## How to run
python scripts/analyze.py --input <csv_path> --outdir output
```

Only the frontmatter (`name` + `description`) is loaded at startup. The full body loads when the LLM calls `load_skill`.
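A rough sketch of that frontmatter-only parse (hypothetical helper; the real loader may differ):

```python
def parse_frontmatter(skill_md: str) -> dict:
    """Read only the leading ---‑delimited block of a SKILL.md as key: value pairs."""
    lines = skill_md.splitlines()
    assert lines[0].strip() == "---", "SKILL.md must start with frontmatter"
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # stop at the closing fence; the body stays unread
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

doc = "---\nname: csv-insights\ndescription: Summarize a CSV.\n---\n\n# CSV Insights Skill\n"
print(parse_frontmatter(doc))  # {'name': 'csv-insights', 'description': 'Summarize a CSV.'}
```

Because parsing stops at the closing `---`, the (potentially long) instruction body costs nothing until `load_skill` is actually called.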

**Skill staging:** When a sandbox is configured, the skill directory (including `scripts/`) is automatically copied into the sandbox so the LLM can reference and execute scripts via shell commands.

**Auto-install:** If the skill directory contains a `requirements.txt`, dependencies are automatically installed into the sandbox venv before the first LLM call. The file uses standard [uv/pip requirements format](https://docs.astral.sh/uv/pip/packages/#installing-from-a-requirements-file).

Reference from an agent config:

```yaml
skills:
  - "csv-insights"
```

### Runtime Context

Pass additional resources at runtime via `context`:

```python
result = await Runner.run(
    "agents/my_agent.yaml",
    messages,
    context={
        "prompt_params": {
            "domain": "finance",
            "system": {"personality": "formal"},
        },
        "tools": ["extra_tool"],
        "agents": ["extra_agent"],
        "skills": ["extra_skill"],
    },
)
```

### Dynamic Resource Bundles

Inject full resource definitions at runtime (e.g. from an external service):

```python
agent_bundle = {
    "name": "specialist_v2",
    "type": "agent",
    "resources": {
        "config.yaml": 'name: "specialist_v2"\nllm:\n  model: "gpt-4o"\n...',
        "agent.py": 'class SpecialistV2(SimpleAgent):\n    ...',
    },
    "structure": {"config.yaml": "file", "agent.py": "file"},
}

result = await Runner.run(
    "agents/triage.yaml",
    messages,
    context={"agents": [agent_bundle]},
)
```

Bundles are materialized to the session sandbox as real files, then resolved identically to any other path.
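Materialization amounts to writing each `resources` entry out as a real file under the sandbox. A simplified sketch, using the field names from the bundle above (the real implementation also honors `structure` and `type`):

```python
from pathlib import Path
import tempfile

def materialize_bundle(bundle: dict, sandbox_dir: Path) -> Path:
    """Write a bundle's in-memory resources out as real files under the sandbox."""
    root = sandbox_dir / bundle["name"]
    root.mkdir(parents=True, exist_ok=True)
    for rel_path, content in bundle["resources"].items():
        dest = root / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
    return root  # now resolvable like any other on-disk agent path

bundle = {
    "name": "specialist_v2",
    "resources": {"config.yaml": 'name: "specialist_v2"\n'},
}
with tempfile.TemporaryDirectory() as tmp:
    root = materialize_bundle(bundle, Path(tmp))
    print((root / "config.yaml").exists())  # True
```

After this step there is no difference between a bundle-sourced agent and one that was on disk all along.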

## Sandboxed Execution

When tools or skills need to run shell commands, Fluxibly provides OS-level isolation using [Anthropic's sandbox-runtime (SRT)](https://github.com/anthropic-experimental/sandbox-runtime) and fast Python environment creation via [uv](https://docs.astral.sh/uv/).

Sandboxing is **lazy** — nothing is created until the first shell command runs. The sandbox is then reused for the entire session and cleaned up automatically by the Runner.
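That create-on-first-use, reuse-thereafter lifecycle follows a standard lazy-initialization pattern; here is an illustrative sketch (not the real `SandboxSession` internals):

```python
class LazySandbox:
    """Nothing is provisioned until the first command; the session is reused afterwards."""
    def __init__(self):
        self._session = None
        self.created = 0  # counter just to make the laziness observable

    def _ensure(self):
        if self._session is None:  # first shell command triggers creation
            self.created += 1
            self._session = object()  # stands in for the real SRT/uv setup
        return self._session

    def run(self, cmd: str) -> str:
        self._ensure()
        return f"ran: {cmd}"

sb = LazySandbox()
print(sb.created)         # 0 — nothing created yet
sb.run("ls"); sb.run("pwd")
print(sb.created)         # 1 — created once, then reused
```

The payoff is that agents which never touch the shell pay no sandbox setup cost at all.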

Install SRT for OS-level isolation (optional):

```bash
npm install -g @anthropic-ai/sandbox-runtime
brew install ripgrep   # Required by SRT (macOS)
```

If SRT is not installed, commands fall back to direct subprocess execution with a warning.
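The detect-and-fall-back behavior might be implemented along these lines (illustrative only; the executable name `srt` here is an assumption, not confirmed by the source):

```python
import shutil
import warnings

def pick_executor(srt_binary: str = "srt") -> str:
    """Prefer SRT isolation when the binary is on PATH; otherwise warn and run directly."""
    if shutil.which(srt_binary):
        return "srt"
    warnings.warn("sandbox-runtime not found; falling back to direct subprocess execution")
    return "subprocess"

with warnings.catch_warnings(record=True):
    warnings.simplefilter("always")
    print(pick_executor("definitely-not-installed-binary"))  # subprocess
```

The fallback keeps tools working on machines without SRT, at the cost of losing OS-level isolation, hence the warning.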

### Sandbox YAML config

```yaml
sandbox:
  use_uv: true                          # Create a uv venv in the sandbox (default: true)
  python_version: "3.11"                 # Python version for uv venv (default: system)
  packages: ["pandas", "matplotlib"]     # Pre-install packages on first use (prefer requirements.txt)
  timeout_seconds: 300                   # Max seconds per command (default: 300)
  allowed_domains:                       # Network allowlist (default: pypi.org, github.com)
    - "pypi.org"
    - "files.pythonhosted.org"
  deny_read:                             # Filesystem read denylist (default: ~/.ssh, ~/.aws, .env)
    - "~/.ssh"
    - "~/.aws"
  allow_write: ["/data/output"]          # Extra writable paths beyond sandbox_dir
```

> **Prefer `requirements.txt`**: Instead of listing packages in the sandbox config, place a `requirements.txt` in each tool or skill directory. Dependencies are auto-installed on first use, and each tool/skill owns its own deps.

### Programmatic usage

```python
from fluxibly import SandboxSession
from fluxibly.tools import ToolService

session = SandboxSession("/tmp/my-sandbox")
ts = ToolService()
ts.register_sandboxed_shell(session)  # Registers a "shell" function tool
```

See `fluxibly/tools/sandbox.py` and `examples/03_multi_agent.py` for details.

## Environment Setup

Copy `.env.example` to `.env` and fill in your values:

```bash
cp .env.example .env
```

The `.env` file is auto-loaded when you import `fluxibly`. At minimum, set your LLM provider API key:

```bash
# .env
OPENAI_API_KEY=sk-...
# or
ANTHROPIC_API_KEY=sk-ant-...
# or
GOOGLE_API_KEY=...
```

See `.env.example` for the full list of available settings (database, monitoring, etc.).

## Monitoring

Fluxibly includes built-in monitoring that records traces, spans, and tool calls to a PostgreSQL database. Monitoring is configured **exclusively via environment variables** in your `.env` file — never through agent YAML configs.

### 1. Set up the database

Create a PostgreSQL database for monitoring data:

```bash
createdb fluxibly_monitoring
```

### 2. Enable monitoring in `.env`

```bash
FLUXIBLY_MONITORING_ENABLED=true
FLUXIBLY_MONITORING_DASHBOARD_DB_HOST=localhost
FLUXIBLY_MONITORING_DASHBOARD_DB_PORT=5432
FLUXIBLY_MONITORING_DASHBOARD_DB_NAME=fluxibly_monitoring
FLUXIBLY_MONITORING_DASHBOARD_DB_USER=postgres
FLUXIBLY_MONITORING_DASHBOARD_DB_PASSWORD=your_password
```

When `FLUXIBLY_MONITORING_ENABLED=true`, all agents automatically pick up monitoring — no code changes needed. Tables are created automatically on first run.

### 3. Launch the monitoring dashboard

```bash
python -m fluxibly.monitoring.dashboard
```

The dashboard runs at `http://localhost:8555` by default. You can customize the host/port:

```bash
python -m fluxibly.monitoring.dashboard --host 0.0.0.0 --port 9000
```

Or via environment variables:

```bash
FLUXIBLY_MONITORING_DASHBOARD_HOST=0.0.0.0
FLUXIBLY_MONITORING_DASHBOARD_PORT=8555
```

## Custom Agents

Extend `AgentTemplate` to build custom agents with pre/post hooks:

```python
from fluxibly.agent import AgentTemplate, AgentConfig, AgentResponse

class MyAgent(AgentTemplate):
    async def pre_forward(self, messages, context):
        # Inject custom params before the pipeline runs
        context.setdefault("prompt_params", {})
        context["prompt_params"]["custom_var"] = "value"
        return messages, context

    async def post_forward(self, response, messages, context):
        # Inspect or modify the response
        return response
```

See `examples/04_custom_agent.py` for a full working example.

## Requirements

- Python 3.11+
- API keys for your chosen LLM provider
- **For sandboxed execution (optional):**
  - [SRT](https://github.com/anthropic-experimental/sandbox-runtime): `npm install -g @anthropic-ai/sandbox-runtime`
  - [ripgrep](https://github.com/BurntSushi/ripgrep): `brew install ripgrep` (macOS) / `apt install ripgrep` (Linux) — required by SRT
  - [uv](https://docs.astral.sh/uv/): `pip install uv` or `brew install uv` — for fast venv creation

## License

MIT License — see [LICENSE](LICENSE) for details.
