Metadata-Version: 2.4
Name: traigent
Version: 0.11.2
Summary: Enterprise-grade LLM optimization platform with advanced analytics and AI-powered insights
Author-email: Traigent Team <opensource@traigent.ai>
License-Expression: AGPL-3.0-only
Project-URL: Homepage, https://github.com/Traigent/Traigent
Project-URL: Documentation, https://docs.traigent.ai
Project-URL: Repository, https://github.com/Traigent/Traigent
Project-URL: Bug Tracker, https://github.com/Traigent/Traigent/issues
Project-URL: Changelog, https://github.com/Traigent/Traigent/blob/main/CHANGELOG.md
Keywords: llm,optimization,machine-learning,ai,hyperparameter-tuning,analytics,meta-learning,cost-optimization,anomaly-detection,predictive-analytics,enterprise,bayesian-optimization
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: click>=8.0.0
Requires-Dist: rich>=12.0.0
Requires-Dist: aiohttp>=3.8.0
Requires-Dist: requests>=2.28.0
Requires-Dist: jsonschema>=4.0.0
Requires-Dist: cryptography>=46.0.5
Requires-Dist: backoff>=2.2.0
Requires-Dist: litellm<1.82.7,>=1.0.0
Requires-Dist: rank-bm25
Requires-Dist: optuna>=4.5.0
Requires-Dist: psutil>=5.8.0
Requires-Dist: pydantic>=2.0.0
Provides-Extra: analytics
Requires-Dist: numpy>=1.21.0; extra == "analytics"
Requires-Dist: pandas>=1.3.0; extra == "analytics"
Requires-Dist: matplotlib>=3.5.0; extra == "analytics"
Provides-Extra: bayesian
Requires-Dist: scikit-learn>=1.0.0; extra == "bayesian"
Requires-Dist: scipy>=1.7.0; extra == "bayesian"
Provides-Extra: integrations
Requires-Dist: langchain>=0.0.200; extra == "integrations"
Requires-Dist: langchain-core>=1.2.11; extra == "integrations"
Requires-Dist: langchain-community>=0.3.27; extra == "integrations"
Requires-Dist: langchain-anthropic>=0.2.0; extra == "integrations"
Requires-Dist: langchain-openai>=0.3.30; extra == "integrations"
Requires-Dist: langchain-chroma>=0.2.5; extra == "integrations"
Requires-Dist: langchain-text-splitters>=0.3.8; extra == "integrations"
Requires-Dist: langchain-google-genai>=2.1.4; extra == "integrations"
Requires-Dist: openai>=2.0.0; extra == "integrations"
Requires-Dist: anthropic>=0.18.0; extra == "integrations"
Requires-Dist: groq>=0.9.0; extra == "integrations"
Requires-Dist: google-genai>=0.8.0; extra == "integrations"
Requires-Dist: google-generativeai>=0.3.0; extra == "integrations"
Requires-Dist: rank_bm25>=0.2.2; extra == "integrations"
Requires-Dist: mlflow>=3.8.1; extra == "integrations"
Requires-Dist: wandb>=0.15.0; extra == "integrations"
Requires-Dist: python-dotenv>=1.0.0; extra == "integrations"
Requires-Dist: boto3>=1.28.0; extra == "integrations"
Requires-Dist: botocore>=1.31.0; extra == "integrations"
Requires-Dist: faiss-cpu>=1.7.0; sys_platform != "win32" and extra == "integrations"
Provides-Extra: dspy
Requires-Dist: dspy-ai>=2.5.0; extra == "dspy"
Provides-Extra: pydanticai
Requires-Dist: pydantic-ai-slim<2,>=1; extra == "pydanticai"
Provides-Extra: security
Requires-Dist: pyjwt>=2.12.0; extra == "security"
Requires-Dist: passlib>=1.7.4; extra == "security"
Requires-Dist: python-multipart>=0.0.22; extra == "security"
Requires-Dist: fastapi>=0.95.0; extra == "security"
Requires-Dist: starlette>=0.49.1; extra == "security"
Requires-Dist: uvicorn>=0.18.0; extra == "security"
Requires-Dist: redis>=4.0.0; extra == "security"
Requires-Dist: defusedxml>=0.7.1; extra == "security"
Requires-Dist: pyotp>=2.9.0; extra == "security"
Provides-Extra: visualization
Requires-Dist: matplotlib>=3.5.0; extra == "visualization"
Requires-Dist: plotly>=5.0.0; extra == "visualization"
Provides-Extra: hybrid
Requires-Dist: httpx[http2]>=0.24.0; extra == "hybrid"
Requires-Dist: claude-code-sdk>=0.0.14; extra == "hybrid"
Requires-Dist: mcp>=1.23.0; extra == "hybrid"
Provides-Extra: test
Requires-Dist: pytest>=7.0.0; extra == "test"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "test"
Requires-Dist: pytest-cov>=4.0.0; extra == "test"
Requires-Dist: pytest-mock>=3.10.0; extra == "test"
Requires-Dist: pytest-timeout>=2.0.0; extra == "test"
Requires-Dist: pytest-xdist>=3.0.0; extra == "test"
Requires-Dist: coverage>=7.0.0; extra == "test"
Requires-Dist: ragas>=0.3.6; extra == "test"
Requires-Dist: rapidfuzz>=3.14.0; extra == "test"
Requires-Dist: hypothesis>=6.0.0; extra == "test"
Provides-Extra: tracing
Requires-Dist: opentelemetry-api<2.0.0,>=1.20.0; extra == "tracing"
Requires-Dist: opentelemetry-sdk<2.0.0,>=1.20.0; extra == "tracing"
Requires-Dist: opentelemetry-exporter-otlp<2.0.0,>=1.20.0; extra == "tracing"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "dev"
Requires-Dist: pytest-xdist>=3.0.0; extra == "dev"
Requires-Dist: black>=26.3.1; extra == "dev"
Requires-Dist: isort>=5.10.0; extra == "dev"
Requires-Dist: flake8>=5.0.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Requires-Dist: types-PyYAML>=6.0.0; extra == "dev"
Requires-Dist: types-requests>=2.31.0; extra == "dev"
Requires-Dist: types-bleach>=6.0.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: bandit>=1.7.0; extra == "dev"
Requires-Dist: hypothesis>=6.100.0; extra == "dev"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.4.0; extra == "docs"
Requires-Dist: mkdocs-material>=9.0.0; extra == "docs"
Requires-Dist: mkdocstrings[python]>=0.22.0; extra == "docs"
Provides-Extra: ml
Requires-Dist: traigent[analytics,bayesian]; extra == "ml"
Requires-Dist: numpy>=1.21.0; extra == "ml"
Requires-Dist: scipy>=1.7.0; extra == "ml"
Provides-Extra: deepeval
Requires-Dist: deepeval>=1.0.0; extra == "deepeval"
Provides-Extra: cloud
Requires-Dist: traigent[security]; extra == "cloud"
Requires-Dist: boto3>=1.28.0; extra == "cloud"
Provides-Extra: recommended
Requires-Dist: traigent[analytics,bayesian,hybrid,integrations,pydanticai,visualization]; extra == "recommended"
Provides-Extra: all
Requires-Dist: traigent[analytics,bayesian,hybrid,integrations,pydanticai,security,test,tracing,visualization]; extra == "all"
Provides-Extra: enterprise
Requires-Dist: traigent[analytics,bayesian,cloud,hybrid,integrations,ml,security,test,tracing,visualization]; extra == "enterprise"
Dynamic: license-file

# Traigent

**Traigent is AI agent infrastructure that lets companies take AI agents out of the lab and deploy them at scale with high confidence.**

**Our mission:** Anything you can measure, we can improve. Whether it's accuracy, response speed, cost, or any other business metric — we deliver improvements that translate into real business value.
<p align="center">
  <a href="https://github.com/Traigent/Traigent/actions/workflows/tests.yml"><img src="https://github.com/Traigent/Traigent/actions/workflows/tests.yml/badge.svg" alt="CI"></a>
  <a href="https://www.gnu.org/licenses/agpl-3.0"><img src="https://img.shields.io/badge/License-AGPL_v3-blue.svg" alt="License: AGPL-3.0"></a>
  <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11%2B-blue.svg" alt="Python 3.11+"></a>
  <a href="https://docs.traigent.ai"><img src="https://img.shields.io/badge/docs-traigent.ai-brightgreen.svg" alt="Docs"></a>
</p>

> **Runs multiple LLM trials** — use `TRAIGENT_MOCK_LLM=true` to test without spending money, or set `TRAIGENT_RUN_COST_LIMIT=2.0` to cap spend. See [Cost Management](#cost-management).

**Quick Install:**

macOS / Linux:

```bash
git clone https://github.com/Traigent/Traigent.git
cd Traigent

python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[recommended]"
```

Windows PowerShell:

```powershell
git clone https://github.com/Traigent/Traigent.git
cd Traigent

python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -e ".[recommended]"
```

For more options, see [Installation details](#installation).

**Try it now — no API keys needed:**

```bash
pip install "traigent[integrations]"
python -m traigent.examples.quickstart
```

Or from a source checkout:

```bash
python hello_world.py
```

**Here's what the quickstart does — one decorator, automatic optimization:**

```python
from langchain_openai import ChatOpenAI
import traigent

@traigent.optimize(
    configuration_space={
        "model": ["gpt-4o-mini", "gpt-4o"],
        "temperature": [0.0, 0.7, 1.0],
    },
    objectives=["accuracy"],
    eval_dataset="qa_samples.jsonl",
)
def answer(question: str) -> str:
    cfg = traigent.get_config()
    llm = ChatOpenAI(model=cfg["model"], temperature=cfg["temperature"])
    return llm.invoke(question).content
```

---

## Using it in your own code

Add `@traigent.optimize()` to any function that calls an LLM — no framework required:

```python
import traigent
import litellm                    # or openai, anthropic, requests …

@traigent.optimize(
    configuration_space={
        "model": ["gpt-4o-mini", "gpt-4o"],
        "temperature": [0.0, 0.7, 1.0],
    },
    objectives=["accuracy"],
    eval_dataset="path/to/your_evals.jsonl",
)
def your_function(question: str) -> str:
    cfg = traigent.get_config()
    response = litellm.completion(
        model=cfg["model"],
        temperature=cfg["temperature"],
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

Works with any LLM provider — [OpenAI](https://platform.openai.com/docs), [Anthropic](https://docs.anthropic.com), [LiteLLM](https://github.com/BerriAI/litellm) (100+ providers), or plain HTTP calls.

<p align="center">
  <a href="https://portal.traigent.ai">Portal</a> &middot;
  <a href="docs/getting-started/GETTING_STARTED.md">Quickstart</a> &middot;
  <a href="examples/">Examples</a> &middot;
  <a href="docs/agent-skill.md">Skill</a> &middot;
  <a href="docs/walkthrough.md">Walkthrough</a>
</p>

---

## Choose Your Path

| Goal | Resource | Time |
|------|----------|------|
| **Get started quickly** | [Quick Start Guide](docs/getting-started/GETTING_STARTED.md) | 5 min |
| **Understand the architecture** | [Architecture Overview](#-architecture-overview) | 5 min |
| **Connect to Traigent Cloud** | [Cloud Setup](#-traigent-cloud) | 5 min |
| **Try examples locally, see them on the cloud** | [Mock walkthrough](walkthrough/mock/) (8 steps) → [Portal](https://portal.traigent.ai) | 15 min |
| **Read the full API reference** | [Decorator Reference →](docs/api-reference/decorator-reference.md) | — |

<details>
<summary>Full documentation index</summary>

| | |
| --- | --- |
| **Get started** | [Installation](docs/getting-started/installation.md) · [5-minute tutorial](docs/getting-started/GETTING_STARTED.md) |
| **User guides** | [Injection Modes](docs/user-guide/injection_modes.md) · [Configuration Spaces](docs/user-guide/configuration-spaces.md) · [Evaluation](docs/user-guide/evaluation_guide.md) |
| **Tunable Variable Language** | [TVL Guide](docs/user-guide/tuned_variables.md) |
| **Advanced** | [Agent Optimization](docs/user-guide/agent_optimization.md) · [Optuna Integration](docs/user-guide/optuna_integration.md) · [JS Bridge](docs/guides/js-bridge.md) |
| **API reference** | [Decorator Reference](docs/api-reference/decorator-reference.md) · [Constraint DSL](docs/features/constraint-dsl.md) |

</details>

---

<details>
<summary>🎬 See Traigent in Action — click to play demos</summary>

| Demo | |
|------|-|
| **LLM Agent Optimization** | [![Optimization Demo](docs/demos/output/optimize-still.svg)](docs/demos/output/optimize.svg) |
| **Optimization Callbacks** | [![Callbacks Demo](docs/demos/output/hooks-still.svg)](docs/demos/output/hooks.svg) |
| **Agent Configuration Hooks** | [![Agent Hooks Demo](docs/demos/output/github-hooks-still.svg)](docs/demos/output/github-hooks.svg) |

</details>

<details>
<summary>🏗️ Architecture Overview — how it works</summary>

1. **Suggest** — the optimizer proposes a configuration to test
2. **Inject** — Traigent overrides your function's parameters with the proposed config
3. **Evaluate** — your function runs against the dataset, scored by the evaluator
4. **Record** — results update the optimizer's model
5. **Repeat** — loop continues until budget/trials exhausted, then outputs results
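
The loop above can be sketched in a few lines of plain Python — a toy random-search illustration of the flow, not Traigent's actual implementation (`random_search` and the toy evaluator are invented for this sketch):

```python
import random

def random_search(configuration_space, evaluate, max_trials=10, seed=0):
    """Minimal suggest -> inject -> evaluate -> record loop (random search)."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(max_trials):
        # 1. Suggest: propose a configuration to test
        config = {key: rng.choice(options) for key, options in configuration_space.items()}
        # 2-3. Inject + Evaluate: run the function under this config and score it
        score = evaluate(config)
        # 4. Record: keep the best result seen so far
        if score > best_score:
            best_config, best_score = config, score
    # 5. Repeat until the trial budget is exhausted, then return the winner
    return best_config, best_score

space = {"model": ["gpt-4o-mini", "gpt-4o"], "temperature": [0.0, 0.7, 1.0]}
# Toy evaluator: pretend lower temperature always scores higher
best, score = random_search(space, lambda cfg: 1.0 - cfg["temperature"], max_trials=6)
```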

![Architecture Overview](docs/demos/output/architecture.svg)

**[Read the full architecture guide →](docs/architecture/ARCHITECTURE.md)**

</details>

---

## 🚀 Walkthrough — 8 runnable examples

All examples run with `TRAIGENT_MOCK_LLM=true` — no API keys needed.

<details>
<summary>Show all 8 walkthrough steps</summary>

| # | Run | What you'll learn |
|---|-----|-------------------|
| 1 | `python walkthrough/mock/01_tuning_qa.py` | Basic model + temperature optimization |
| 2 | `python walkthrough/mock/02_zero_code_change.py` | Seamless mode — zero changes to existing code |
| 3 | `python walkthrough/mock/03_parameter_mode.py` | Explicit config access via `traigent.get_config()` |
| 4 | `python walkthrough/mock/04_multi_objective.py` | Balance accuracy, cost, and latency |
| 5 | `python walkthrough/mock/05_rag_parallel.py` | RAG optimization with parallel evaluation |
| 6 | `python walkthrough/mock/06_custom_evaluator.py` | Define your own success metrics |
| 7 | `python walkthrough/mock/07_multi_provider.py` | Compare OpenAI, Anthropic, Google in one run |
| 8 | `python walkthrough/mock/08_privacy_modes.py` | Local-only privacy-first execution |

</details>

**[Browse reference examples →](examples/) · [Injection modes →](docs/user-guide/injection_modes.md)**

---

### ☁️ Traigent Cloud

Connect to [Traigent Portal](https://portal.traigent.ai) to view results, compare trials, and collaborate.

1. **Sign up** at [portal.traigent.ai](https://portal.traigent.ai) — verify your email to activate
2. **Create an API key** — click your name (top-right) → **API Keys** → **+ Create API Key**
3. **Connect** — run `traigent auth login` or set `export TRAIGENT_API_KEY="sk_..."`  <!-- pragma: allowlist secret -->
4. **Run** — results appear in the portal automatically

<details>
<summary>Credential priority and multi-provider setup</summary>

| Credential  | 1st (highest)                  | 2nd                    | 3rd (default)        |
|-------------|--------------------------------|------------------------|----------------------|
| API Key     | `TRAIGENT_API_KEY` env var     | Stored CLI credentials | None (local only)    |
| Backend URL | `TRAIGENT_BACKEND_URL` env var | Stored CLI credentials | `portal.traigent.ai` |

> **Tip:** No env vars needed after `traigent auth login` — the SDK picks up stored credentials automatically.
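
The precedence rules above can be illustrated with a small resolver — a conceptual sketch, not the SDK's actual code (`resolve_backend_url` and its arguments are invented for this example):

```python
import os

def resolve_backend_url(stored_credentials=None, env=os.environ):
    """Resolve the backend URL using the documented precedence:
    env var > stored CLI credentials > default portal."""
    if env.get("TRAIGENT_BACKEND_URL"):
        return env["TRAIGENT_BACKEND_URL"]
    if stored_credentials and stored_credentials.get("backend_url"):
        return stored_credentials["backend_url"]
    return "https://portal.traigent.ai"

# The env var wins even when stored CLI credentials exist
url = resolve_backend_url(
    stored_credentials={"backend_url": "https://example.internal"},
    env={"TRAIGENT_BACKEND_URL": "https://staging.example"},
)
```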

**Multi-provider optimization** — use [LiteLLM](https://github.com/BerriAI/litellm) to compare OpenAI, Anthropic, Google, Mistral, and 100+ providers:

```python
@traigent.optimize(
    configuration_space={
        "model": ["gpt-4o-mini", "claude-3-haiku-20240307", "gemini/gemini-pro"],
        "temperature": [0.1, 0.5, 0.9],
    },
    objectives=["accuracy", "cost"],
    eval_dataset="data/qa_samples.jsonl",
)
def multi_provider_agent(question: str) -> str:
    config = traigent.get_config()
    response = litellm.completion(
        model=config.get("model"),
        temperature=config.get("temperature"),
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

</details>

---

## ✨ Key Features

| Feature | Description |
|---------|-------------|
| **Zero-code integration** | Add `@traigent.optimize()` to existing code — no refactoring |
| **Multi-algorithm** | Random, Grid, Bayesian (TPE, NSGA-II, CMA-ES) via Optuna |
| **Multi-objective** | Optimize accuracy, latency, cost, and custom metrics simultaneously |
| **Framework support** | LangChain, OpenAI SDK, Anthropic, LiteLLM, and any LLM provider |
| **Cost tracking** | Integrated tokencost library with 500+ model pricing |
| **Parallel execution** | Concurrent trials and example-level parallelism |
| **Error resilience** | Interactive pause on rate limits and budget caps — resume or stop gracefully |
| **Live progress** | Auto-enabled progress bar in interactive terminals (`progress_bar=False` to disable) |
| **Privacy-first** | Local execution mode keeps all data on your machine |

**[TraigentDemo →](https://github.com/Traigent/TraigentDemo)** — Streamlit playground, use cases, and research benchmarks

---

<details>
<summary>📦 Installation details, execution modes, CLI, and more</summary>

### Installation

Python 3.11+ on Linux, macOS, or Windows. For coordinated release validation, install from this repository source tree.

| Feature Set | Description |
|-------------|-------------|
| `[recommended]` | All user-facing features (default) |
| `[integrations]` | LangChain, OpenAI, Anthropic adapters |
| `[analytics]` | Visualization and analytics |
| `[bayesian]` | Bayesian optimization (TPE, NSGA-II) |
| `[all]` | Everything |

**[Full installation guide →](docs/getting-started/installation.md)**

Source install with `pip`:

```bash
git clone https://github.com/Traigent/Traigent.git
cd Traigent
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[recommended]"
```

### Cost Management

| Setting | How |
|---------|-----|
| Testing (no API calls) | `TRAIGENT_MOCK_LLM=true` |
| Cost Limit | `TRAIGENT_RUN_COST_LIMIT=2.0` (default: $2/run) |

Cost estimates are approximations. See [DISCLAIMER.md](DISCLAIMER.md) for details.
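
The effect of `TRAIGENT_RUN_COST_LIMIT` can be pictured as a running-total guard that stops before the cap is exceeded — a conceptual sketch, not Traigent's implementation (`run_trials` and the per-trial costs are invented for illustration):

```python
import os

def run_trials(trial_costs, cost_limit=None):
    """Run trials until the next one would push spend past the cap."""
    if cost_limit is None:
        # Mirror the documented default of $2 per run
        cost_limit = float(os.environ.get("TRAIGENT_RUN_COST_LIMIT", "2.0"))
    spent, completed = 0.0, 0
    for cost in trial_costs:
        if spent + cost > cost_limit:
            break  # cap reached: stop gracefully instead of overspending
        spent += cost
        completed += 1
    return completed, spent

# Trials cost $0.50, $0.80, $0.90, $0.40; the third would exceed a $2 cap
completed, spent = run_trials([0.5, 0.8, 0.9, 0.4], cost_limit=2.0)
```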

### Evaluation

Provide a JSONL dataset — Traigent scores outputs using semantic similarity by default:

```jsonl
{"input": {"question": "What is AI?"}, "output": "Artificial Intelligence"}
{"input": {"question": "Explain ML"}, "output": "Machine learning uses data and algorithms"}
```

- `input` (required): your function's parameter names as keys
- `output` (optional): expected output for accuracy scoring
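
Building a dataset in this format programmatically takes only the standard library — a minimal sketch (the filename and sample rows are illustrative):

```python
import json

samples = [
    {"input": {"question": "What is AI?"}, "output": "Artificial Intelligence"},
    {"input": {"question": "Explain ML"}, "output": "Machine learning uses data and algorithms"},
]

# JSONL: one JSON object per line
with open("qa_samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Sanity-check: every row parses and carries the required "input" key
with open("qa_samples.jsonl") as f:
    rows = [json.loads(line) for line in f]
assert all("input" in row for row in rows)
```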

**[Evaluation guide →](docs/guides/evaluation.md)** — custom evaluators, dataset formats, troubleshooting

### Execution Modes

| Mode | Status | Privacy | Algorithm | Best For |
|------|--------|---------|-----------|----------|
| **Local** (`edge_analytics`) | ✅ Available | ✅ Complete | All (Random/Grid/Bayesian/Optuna) | All use cases |
| **Hybrid** | ✅ Available | ✅ Execution local | All (Random/Grid/Bayesian/Optuna) | Balanced approach |
| **Cloud** | 🚧 Coming Soon | ⚠️ Metadata | Random/Grid/Bayesian | Production, teams |

**[Execution modes guide →](docs/guides/execution-modes.md)** — mode comparisons, privacy details, migration path

### Quick Reference

| Parameter | Where | Description |
|-----------|-------|-------------|
| `configuration_space` | `@traigent.optimize()` | Parameters to test (required) |
| `objectives` | `@traigent.optimize()` | Metrics to optimize for |
| `eval_dataset` | `@traigent.optimize()` | Dataset for evaluation |
| `algorithm` | `.optimize()` call | `"random"`, `"grid"`, `"bayesian"` |
| `max_trials` | `.optimize()` call | Number of configurations to test |
| `progress_bar` | `.optimize()` call | `True` / `False` / `None` (auto) — live progress bar |
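
For grid search, the number of trials equals the size of the configuration space — the cross-product of the option lists. A quick way to enumerate it (illustrative only, assuming grid search tests every combination):

```python
from itertools import product

configuration_space = {
    "model": ["gpt-4o-mini", "gpt-4o"],
    "temperature": [0.0, 0.7, 1.0],
}

keys = list(configuration_space)
grid = [dict(zip(keys, values)) for values in product(*configuration_space.values())]
# 2 models x 3 temperatures = 6 configurations
```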

### Injection Modes

| Mode | Best for | How |
|------|----------|-----|
| **Seamless** (default) | Existing codebases | Traigent intercepts `ChatOpenAI`, `as_retriever`, etc. — zero code changes |
| **Parameter** | New development | Receives `TraigentConfig` object with explicit `config.get("key")` access |

**[Injection modes guide →](docs/user-guide/injection_modes.md)**
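
The parameter-mode idea — the framework activates one configuration per trial, and your function reads it explicitly — can be sketched generically. This is a stand-in, not Traigent's code: `with_config` and this local `get_config` are invented to show the pattern.

```python
import contextvars
import functools

_active_config = contextvars.ContextVar("active_config", default={})

def get_config():
    """Return the configuration active for the current trial."""
    return _active_config.get()

def with_config(config):
    """Decorator that makes `config` active for the duration of the call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            token = _active_config.set(config)
            try:
                return fn(*args, **kwargs)
            finally:
                _active_config.reset(token)  # restore the previous config
        return wrapper
    return decorator

@with_config({"model": "gpt-4o-mini", "temperature": 0.7})
def answer(question: str) -> str:
    cfg = get_config()
    return f"[{cfg['model']} @ {cfg['temperature']}] {question}"
```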

### CLI

```bash
traigent optimize module.py -a grid -n 10    # Run optimization
traigent validate data.jsonl -o accuracy     # Validate dataset
traigent results                             # List past runs
traigent plot <name> -p progress             # Visualize results
traigent auth login                          # Authenticate with portal
traigent --help                              # Full command reference
```

### Troubleshooting

| Problem | Fix |
|---------|-----|
| `ModuleNotFoundError` | `pip install -e ".[recommended]"` or check venv is activated |
| 0.0% accuracy | Set `TRAIGENT_MOCK_LLM=true`, or check dataset format |
| Missing API keys | Copy `.env.example` to `.env`; or use mock mode |
| Permission errors | Create a fresh venv and reinstall dependencies |

</details>

---

## 🛠️ Development

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[all,dev]"              # Install with dev dependencies
TRAIGENT_MOCK_LLM=true pytest            # Run tests
make format && make lint                 # Format and lint
```

**[Architecture guide →](docs/architecture/ARCHITECTURE.md) · [Project structure →](docs/architecture/project-structure.md)**

## 🤝 Contributing

We welcome bug reports and feature requests via [GitHub Issues](https://github.com/Traigent/Traigent/issues). For security vulnerabilities, please email security@traigent.ai.

## 📄 License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0-only); see the [LICENSE](LICENSE) file for details.

---

**[Get Started →](docs/getting-started/GETTING_STARTED.md)** | **[Examples →](examples/)** | **[Portal →](https://portal.traigent.ai)** | **[Skill →](docs/agent-skill.md)** | **[Walkthrough →](docs/walkthrough.md)** | **[GitHub Issues](https://github.com/Traigent/Traigent/issues)** | **[Discussions](https://github.com/Traigent/Traigent/discussions)**
