Metadata-Version: 2.4
Name: pipelex
Version: 0.24.1
Summary: Execute composable AI methods declared in the MTHDS open standard
Project-URL: Homepage, https://pipelex.com
Project-URL: Repository, https://github.com/Pipelex/pipelex
Project-URL: Documentation, https://docs.pipelex.com/
Project-URL: Changelog, https://docs.pipelex.com/changelog/
Project-URL: MTHDS Standard, https://mthds.ai/latest/
Author-email: "Evotis S.A.S." <oss@pipelex.com>
Maintainer-email: Pipelex staff <oss@pipelex.com>
License-Expression: MIT
License-File: LICENSE
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: <3.15,>=3.10
Requires-Dist: aiofiles>=23.2.1
Requires-Dist: backports-strenum>=1.3.0; python_version < '3.11'
Requires-Dist: filetype>=1.2.0
Requires-Dist: httpx<1.0.0,>=0.23.0
Requires-Dist: instructor!=1.11.*,!=1.12.*,>=1.8.3
Requires-Dist: jinja2>=3.1.4
Requires-Dist: json2html>=1.3.0
Requires-Dist: kajson==0.3.1
Requires-Dist: markdown>=3.6
Requires-Dist: mthds>=0.2.0
Requires-Dist: networkx>=3.4.2
Requires-Dist: openai>=1.108.1
Requires-Dist: opentelemetry-api
Requires-Dist: opentelemetry-exporter-otlp-proto-http
Requires-Dist: opentelemetry-sdk
Requires-Dist: opentelemetry-semantic-conventions
Requires-Dist: pillow>=11.2.1
Requires-Dist: polyfactory>=2.21.0
Requires-Dist: portkey-ai>=2.1.0
Requires-Dist: posthog>=6.7.0
Requires-Dist: pydantic<3.0.0,>=2.10.6
Requires-Dist: pypdfium2!=4.30.1,<5.0.0,>=4.30.0
Requires-Dist: python-dotenv>=1.0.1
Requires-Dist: pyyaml>=6.0.2
Requires-Dist: reportlab<5,>=4.0
Requires-Dist: rich>=13.8.1
Requires-Dist: semantic-version>=2.10.0
Requires-Dist: shortuuid>=1.0.13
Requires-Dist: tomli>=2.3.0
Requires-Dist: tomlkit>=0.13.2
Requires-Dist: typer>=0.16.0
Requires-Dist: typing-extensions>=4.13.2
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.78.0; extra == 'anthropic'
Provides-Extra: bedrock
Requires-Dist: aioboto3>=13.4.0; extra == 'bedrock'
Requires-Dist: boto3>=1.34.131; extra == 'bedrock'
Provides-Extra: dev
Requires-Dist: boto3-stubs>=1.35.24; extra == 'dev'
Requires-Dist: cryptography>=46.0.7; extra == 'dev'
Requires-Dist: moto[s3]>=5.0.0; extra == 'dev'
Requires-Dist: mypy==1.19.1; extra == 'dev'
Requires-Dist: pipelex-tools>=0.3.2; extra == 'dev'
Requires-Dist: pylint==4.0.4; extra == 'dev'
Requires-Dist: pyright==1.1.408; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest-cov>=6.1.1; extra == 'dev'
Requires-Dist: pytest-mock>=3.14.0; extra == 'dev'
Requires-Dist: pytest-sugar>=1.0.0; extra == 'dev'
Requires-Dist: pytest-xdist>=3.6.1; extra == 'dev'
Requires-Dist: pytest>=9.0.3; extra == 'dev'
Requires-Dist: ruff==0.14.13; extra == 'dev'
Requires-Dist: types-aioboto3[bedrock,bedrock-runtime]>=13.4.0; extra == 'dev'
Requires-Dist: types-aiofiles>=24.1.0.20240626; extra == 'dev'
Requires-Dist: types-markdown>=3.6.0.20240316; extra == 'dev'
Requires-Dist: types-networkx>=3.3.0.20241020; extra == 'dev'
Requires-Dist: types-pyyaml>=6.0.12.20250326; extra == 'dev'
Provides-Extra: docling
Requires-Dist: docling>=2.64.0; extra == 'docling'
Requires-Dist: lxml>=6.1.0; extra == 'docling'
Provides-Extra: docs
Requires-Dist: mike>=2.1.3; extra == 'docs'
Requires-Dist: mkdocs-glightbox>=0.4.0; extra == 'docs'
Requires-Dist: mkdocs-llmstxt-md>=0.2.0; extra == 'docs'
Requires-Dist: mkdocs-material>=9.6.14; extra == 'docs'
Requires-Dist: mkdocs-meta-manager>=1.1.0; extra == 'docs'
Requires-Dist: mkdocs-redirects==1.2.2; extra == 'docs'
Requires-Dist: mkdocs>=1.6.1; extra == 'docs'
Provides-Extra: fal
Requires-Dist: fal-client>=0.4.1; extra == 'fal'
Provides-Extra: gcp-storage
Requires-Dist: cryptography>=46.0.7; extra == 'gcp-storage'
Requires-Dist: google-cloud-storage>=2.10.0; extra == 'gcp-storage'
Provides-Extra: google
Requires-Dist: cryptography>=46.0.7; extra == 'google'
Requires-Dist: google-auth-oauthlib>=1.2.1; extra == 'google'
Provides-Extra: google-genai
Requires-Dist: cryptography>=46.0.7; extra == 'google-genai'
Requires-Dist: google-genai; extra == 'google-genai'
Requires-Dist: instructor[google-genai]; extra == 'google-genai'
Provides-Extra: huggingface
Requires-Dist: huggingface-hub<1.0.0,>=0.23; extra == 'huggingface'
Provides-Extra: linkup
Requires-Dist: linkup-sdk>=0.12.0; extra == 'linkup'
Provides-Extra: mistralai
Requires-Dist: mistralai>=1.12.0; extra == 'mistralai'
Provides-Extra: s3
Requires-Dist: aioboto3>=13.4.0; extra == 's3'
Requires-Dist: boto3>=1.34.131; extra == 's3'
Description-Content-Type: text/markdown

<div align="center">
  <a href="https://www.pipelex.com/"><img src="https://raw.githubusercontent.com/Pipelex/pipelex/main/.github/assets/logo.png" alt="Pipelex Logo" width="400" style="max-width: 100%; height: auto;"></a>

  <br/>
  <br/>
  <br/>
  <h2 align="center">Build & Run AI Methods</h2>
  <p align="center">A method is a reusable, typed AI procedure — declared in a <code>.mthds</code> file and executed by Pipelex.<br/>
Each step is explicit, each output is structured, and every run is repeatable.</p>


  <div>
    <a href="https://go.pipelex.com/demo"><strong>Demo</strong></a> -
    <a href="https://docs.pipelex.com/"><strong>Documentation</strong></a> -
    <a href="https://mthds.sh"><strong>Hub</strong></a> -
    <a href="https://github.com/Pipelex/pipelex/issues"><strong>Report Bug</strong></a> -
    <a href="https://github.com/Pipelex/pipelex/discussions"><strong>Feature Request</strong></a>
  </div>
  <br/>

  <p align="center">
    <a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="MIT License"></a>
    <a href="https://github.com/Pipelex/pipelex/tree/main/tests"><img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/Pipelex/pipelex/main/.badges/tests.json" alt="Tests"></a>
    <a href="https://pypi.org/project/pipelex/"><img src="https://img.shields.io/pypi/v/pipelex?logo=pypi&logoColor=white&color=blue&style=flat-square"
     alt="PyPI – latest release"></a>
    <br/>
    <a href="https://go.pipelex.com/discord"><img src="https://img.shields.io/badge/Discord-5865F2?logo=discord&logoColor=white" alt="Discord"></a>
    <a href="https://www.youtube.com/@PipelexAI"><img src="https://img.shields.io/badge/YouTube-FF0000?logo=youtube&logoColor=white" alt="YouTube"></a>
    <a href="https://pipelex.com"><img src="https://img.shields.io/badge/Homepage-03bb95?logo=google-chrome&logoColor=white&style=flat" alt="Website"></a>
    <a href="https://github.com/Pipelex/pipelex-cookbook/tree/main"><img src="https://img.shields.io/badge/Cookbook-5a0dad?logo=github&logoColor=white&style=flat" alt="Cookbook"></a>
    <a href="https://docs.pipelex.com/"><img src="https://img.shields.io/badge/Docs-03bb95?logo=read-the-docs&logoColor=white&style=flat" alt="Documentation"></a>
    <a href="https://docs.pipelex.com/changelog/"><img src="https://img.shields.io/badge/Changelog-03bb95?logo=git&logoColor=white&style=flat" alt="Changelog"></a>
    <br/>
    <br/>
</div>

![Claude Code + Pipelex + MTHDS](https://raw.githubusercontent.com/Pipelex/pipelex/main/.github/assets/Claude-Code-Pipelex-MTHDS-Cursor.png)


## What a Method Looks Like

```toml
[pipe.summarize_article]
type    = "PipeLLM"
inputs  = { article = "Text", audience = "Text" }
output  = "Text"
prompt  = "Summarize $article in three bullet points for $audience."
```

From here, Pipelex handles model routing across 60+ models, structured output parsing, and pipeline orchestration.
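Inputs for a method like this are passed as JSON keyed by the declared input names. A hypothetical `inputs.json` for `summarize_article` — the `native.Text` concept name and content shape here are assumptions modeled on the `native.Document` inputs shown in the full example below:

```json
{
  "article": {
    "concept": "native.Text",
    "content": "Full text of the article to summarize..."
  },
  "audience": {
    "concept": "native.Text",
    "content": "busy executives"
  }
}
```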

## Why Methods?

| | |
|---|---|
| **Declarative** — Human-readable `.mthds` files that work across models | **Typed** — Semantic types: AI understands what you mean, every input/output connects with purpose |
| **Repeatable** — Deterministic orchestration with controlled room for AI creativity | **Composable** — Chain pipes into sequences, nest methods inside methods, share with the community |

# Quick Start

## With Claude Code (Recommended)

Paste this into Claude Code:

```
Install mthds: run bash -c 'npm install -g mthds && mthds-agent bootstrap && claude plugin marketplace add mthds-ai/mthds-plugins && claude plugin install mthds@mthds-plugins'. Then tell me to restart Claude Code.
```

## With Codex

Paste this into Codex:

```
Install mthds: run bash -c 'npm install -g mthds && mthds-agent bootstrap && bash <(curl -fsSL https://raw.githubusercontent.com/mthds-ai/mthds-plugins/main/bin/install-codex.sh)'. Then tell me to run /plugins, search for MTHDS, and install it.
```

Install the [VS Code extension](https://go.pipelex.com/vscode) for `.mthds` syntax highlighting and flowchart visualization.

## From the Terminal

```bash
npm install -g mthds
mthds-agent bootstrap
pipelex init
```

Verify everything is set up correctly:

```bash
pipelex doctor
```

## Python Package Only

If you just need the Pipelex runtime without agent integration:

```bash
uv tool install pipelex
pipelex init
```

## Configure AI Access

- **Pipelex Gateway (Recommended)** — Free credits, single API key for LLMs, OCR / document extraction, and image generation across all major providers. [Get your key](https://app.pipelex.com/), add `PIPELEX_GATEWAY_API_KEY=your-key-here` to `~/.pipelex/.env`, run `pipelex init`.
- **Bring Your Own Keys** — Use existing API keys from OpenAI, Anthropic, Google, Mistral, etc. See [Configure AI Providers](https://docs.pipelex.com/latest/setup/configure-ai-providers/).
- **Local AI** — Ollama, vLLM, LM Studio, or llama.cpp — no API keys required. See [Configure AI Providers](https://docs.pipelex.com/latest/setup/configure-ai-providers/).
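Whichever route you pick, credentials live in `~/.pipelex/.env`. A minimal sketch — the Gateway variable is from the bullet above; the provider variable names shown are those providers' own conventions, so check [Configure AI Providers](https://docs.pipelex.com/latest/setup/configure-ai-providers/) for the exact names Pipelex reads:

```bash
# ~/.pipelex/.env
PIPELEX_GATEWAY_API_KEY=your-key-here   # Gateway route

# Or bring your own keys (illustrative; see Configure AI Providers):
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
```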


# Real-World Example: CV Batch Screening

A production method that takes a stack of CVs and a job offer PDF, extracts and analyzes each CV, then scores how well each candidate matches the role.
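PLACEHOLDER-UNUSED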

**cv_batch_screening.mthds**

```toml
[pipe.batch_analyze_cvs_for_job_offer]
type = "PipeSequence"
description = """
Main orchestrator pipe that takes a bunch of CVs and a job offer in PDF format, and analyzes how they match.
"""
inputs = { cvs = "Document[]", job_offer_pdf = "Document" }
output = "CandidateMatch[]"
steps = [
  { pipe = "prepare_job_offer", result = "job_requirements" },
  { pipe = "process_cv", batch_over = "cvs", batch_as = "cv_pdf", result = "match_analyses" },
]
```
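The `batch_over` / `batch_as` pair fans the sub-pipe out over each item of a list input and collects the results in order. A plain-Python analogy of that semantics — a sketch, not the Pipelex runtime; `process_cv` here is a stand-in:

```python
# Plain-Python analogy of `batch_over`: run the sub-pipe once per list item,
# binding each item to the `batch_as` name, and collect the results in order.
def run_batch(pipe, items, batch_as="cv_pdf", extra_inputs=None):
    extra_inputs = extra_inputs or {}
    return [pipe(**{batch_as: item}, **extra_inputs) for item in items]

# Stand-in for the `process_cv` pipe: turns one CV into one match analysis.
def process_cv(cv_pdf, job_requirements):
    return {"cv": cv_pdf, "scored_against": job_requirements}

matches = run_batch(
    process_cv,
    items=["cv_a.pdf", "cv_b.pdf", "cv_c.pdf"],
    extra_inputs={"job_requirements": "senior engineer"},
)
print(len(matches))  # one CandidateMatch-like result per input CV
```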

<details>
<summary><b>View concepts, supporting pipes, flowchart, and run commands</b></summary>

**Concepts:**

```toml
[concept.CandidateProfile]
description = "A structured summary of a job candidate's professional background extracted from their CV."

[concept.CandidateProfile.structure]
skills       = { type = "text", description = "Technical and soft skills possessed by the candidate", required = true }
experience   = { type = "text", description = "Work history and professional experience", required = true }
education    = { type = "text", description = "Educational background and qualifications", required = true }
achievements = { type = "text", description = "Notable accomplishments and certifications" }

[concept.JobRequirements]
description = "A structured summary of what a job position requires from candidates."

[concept.JobRequirements.structure]
required_skills  = { type = "text", description = "Skills that are mandatory for the position", required = true }
responsibilities = { type = "text", description = "Main duties and tasks of the role", required = true }
qualifications   = { type = "text", description = "Required education, certifications, or experience levels", required = true }
nice_to_haves    = { type = "text", description = "Preferred but not mandatory qualifications" }

[concept.CandidateMatch]
description = "An evaluation of how well a candidate fits a job position."

[concept.CandidateMatch.structure]
match_score        = { type = "number", description = "Numerical score representing overall fit percentage between 0 and 100", required = true }
strengths          = { type = "text", description = "Areas where the candidate meets or exceeds requirements", required = true }
gaps               = { type = "text", description = "Areas where the candidate falls short of requirements", required = true }
overall_assessment = { type = "text", description = "Summary evaluation of the candidate's suitability", required = true }
```
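Each `[concept.X.structure]` block maps onto a typed class on the Python side. A plain-Python sketch of the `CandidateMatch` shape — the real classes are generated by `pipelex build structures`, so the exact field types and base class will differ:

```python
from dataclasses import dataclass

@dataclass
class CandidateMatch:
    # All four fields are `required = true` in the concept structure.
    match_score: float          # overall fit, 0-100
    strengths: str
    gaps: str
    overall_assessment: str

match = CandidateMatch(
    match_score=82.0,
    strengths="Strong Python and data-engineering background",
    gaps="No prior team-lead experience",
    overall_assessment="Good fit for a senior IC role",
)
print(match.match_score)  # 82.0
```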

<details>
<summary><b>Click to view the supporting pipes implementation</b></summary>

```toml
[pipe.prepare_job_offer]
type = "PipeSequence"
description = """
Extracts and analyzes the job offer PDF to produce structured job requirements.
"""
inputs = { job_offer_pdf = "Document" }
output = "JobRequirements"
steps = [
  { pipe = "extract_one_job_offer", result = "job_offer_pages" },
  { pipe = "analyze_job_requirements", result = "job_requirements" },
]

[pipe.extract_one_job_offer]
type        = "PipeExtract"
description = "Extracts text content from the job offer PDF document"
inputs      = { job_offer_pdf = "Document" }
output      = "Page[]"
model       = "@default-text-from-pdf"

[pipe.analyze_job_requirements]
type = "PipeLLM"
description = """
Parses and summarizes the job requirements from the extracted job offer content, identifying required skills, responsibilities, qualifications, and nice-to-haves
"""
inputs = { job_offer_pages = "Page" }
output = "JobRequirements"
model = "$writing-factual"
system_prompt = """
You are an expert HR analyst specializing in parsing job descriptions. Your task is to extract and summarize job requirements into a structured format.
"""
prompt = """
Analyze the following job offer content and extract the key requirements for the position.

@job_offer_pages
"""

[pipe.process_cv]
type = "PipeSequence"
description = "Processes one application"
inputs = { cv_pdf = "Document", job_requirements = "JobRequirements" }
output = "CandidateMatch"
steps = [
  { pipe = "extract_one_cv", result = "cv_pages" },
  { pipe = "analyze_one_cv", result = "candidate_profile" },
  { pipe = "analyze_match", result = "match_analysis" },
]

[pipe.extract_one_cv]
type        = "PipeExtract"
description = "Extracts text content from the CV PDF document"
inputs      = { cv_pdf = "Document" }
output      = "Page[]"
model       = "@default-text-from-pdf"

[pipe.analyze_one_cv]
type = "PipeLLM"
description = """
Parses and summarizes the candidate's professional profile from the extracted CV content, identifying skills, experience, education, and achievements
"""
inputs = { cv_pages = "Page" }
output = "CandidateProfile"
model = "$writing-factual"
system_prompt = """
You are an expert HR analyst specializing in parsing and summarizing candidate CVs. Your task is to extract and structure the candidate's professional profile into a structured format.
"""
prompt = """
Analyze the following CV content and extract the candidate's professional profile.

@cv_pages
"""

[pipe.analyze_match]
type = "PipeLLM"
description = """
Evaluates how well the candidate matches the job requirements, calculating a match score and identifying strengths and gaps
"""
inputs = { candidate_profile = "CandidateProfile", job_requirements = "JobRequirements" }
output = "CandidateMatch"
model = "$writing-factual"
system_prompt = """
You are an expert HR analyst specializing in candidate-job fit evaluation. Your task is to produce a structured match analysis comparing a candidate's profile against job requirements.
"""
prompt = """
Analyze how well the candidate matches the job requirements. Evaluate their fit by comparing their skills, experience, and qualifications against what the position demands.

@candidate_profile

@job_requirements

Provide a comprehensive match analysis including a numerical score, identified strengths, gaps, and an overall assessment.
"""
```

</details>

<details>
<summary><b>View the pipeline flowchart</b></summary>

```mermaid
flowchart LR
    %% Pipe and stuff nodes within controller subgraphs
    subgraph sg_n_8b2136e3fe["batch_analyze_cvs_for_job_offer"]
        subgraph sg_n_91d5d6dc7c["prepare_job_offer"]
            n_fde22777cb["analyze_job_requirements"]
            s_f9f703fbb4(["job_requirements<br/>JobRequirements"]):::stuff
            n_b8469c838f["extract_one_job_offer"]
            s_d998350046(["job_offer_pages<br/>Page"]):::stuff
        end
        subgraph sg_n_f8d5afb7cd["process_cv_batch"]
            subgraph sg_n_6e53e16369["process_cv"]
                n_c18aded200["analyze_match"]
                s_5c911f7e54(["match_analysis<br/>CandidateMatch"]):::stuff
                n_a7ed00ac24["analyze_one_cv"]
                s_c5ae714e89(["candidate_profile<br/>CandidateProfile"]):::stuff
                n_d24f39aa60["extract_one_cv"]
                s_427beb5195(["cv_pdf<br/>Document"]):::stuff
                s_f1f80289df(["cv_pages<br/>Page"]):::stuff
            end
            subgraph sg_n_2cfb7a32c8["process_cv"]
                n_f6a25d1769["analyze_match"]
                s_ea99eee6ed(["match_analysis<br/>CandidateMatch"]):::stuff
                n_f48b73fbee["analyze_one_cv"]
                s_e1ffee913e(["candidate_profile<br/>CandidateProfile"]):::stuff
                n_d16f2fe381["extract_one_cv"]
                s_041bb18fb4(["cv_pdf<br/>Document"]):::stuff
                s_5fbba7194a(["cv_pages<br/>Page"]):::stuff
            end
            subgraph sg_n_08a7186be9["process_cv"]
                n_937e750ea4["analyze_match"]
                s_bb41a103f0(["match_analysis<br/>CandidateMatch"]):::stuff
                n_786a2969d5["analyze_one_cv"]
                s_c47fe821d7(["candidate_profile<br/>CandidateProfile"]):::stuff
                n_38f0cfd11c["extract_one_cv"]
                s_2634ece93d(["cv_pdf<br/>Document"]):::stuff
                s_44e253b325(["cv_pages<br/>Page"]):::stuff
            end
        end
    end

    %% Pipeline input stuff nodes (no producer)
    s_9b7e74ac51(["job_offer_pdf<br/>Document"]):::stuff

    %% Data flow edges: producer -> stuff -> consumer
    n_a7ed00ac24 --> s_c5ae714e89
    n_b8469c838f --> s_d998350046
    n_f48b73fbee --> s_e1ffee913e
    n_d16f2fe381 --> s_5fbba7194a
    n_fde22777cb --> s_f9f703fbb4
    n_d24f39aa60 --> s_f1f80289df
    n_38f0cfd11c --> s_44e253b325
    n_786a2969d5 --> s_c47fe821d7
    n_c18aded200 --> s_5c911f7e54
    n_f6a25d1769 --> s_ea99eee6ed
    n_937e750ea4 --> s_bb41a103f0
    s_c5ae714e89 --> n_c18aded200
    s_9b7e74ac51 --> n_b8469c838f
    s_d998350046 --> n_fde22777cb
    s_e1ffee913e --> n_f6a25d1769
    s_427beb5195 --> n_d24f39aa60
    s_041bb18fb4 --> n_d16f2fe381
    s_2634ece93d --> n_38f0cfd11c
    s_5fbba7194a --> n_f48b73fbee
    s_f9f703fbb4 --> n_c18aded200
    s_f9f703fbb4 --> n_f6a25d1769
    s_f9f703fbb4 --> n_937e750ea4
    s_f1f80289df --> n_a7ed00ac24
    s_44e253b325 --> n_786a2969d5
    s_c47fe821d7 --> n_937e750ea4

    %% Batch edges: list-item relationships
    s_52d84618d0(["match_analyses<br/>CandidateMatch"]):::stuff
    s_5c911f7e54 -."[0]".-> s_52d84618d0
    s_ea99eee6ed -."[1]".-> s_52d84618d0
    s_bb41a103f0 -."[2]".-> s_52d84618d0

    %% Style definitions
    classDef failed fill:#ffcccc,stroke:#cc0000
    classDef stuff fill:#fff3e6,stroke:#cc6600,stroke-width:2px
    classDef controller fill:#e6f3ff,stroke:#0066cc

    %% Subgraph depth-based coloring
    style sg_n_08a7186be9 fill:#fffde6
    style sg_n_2cfb7a32c8 fill:#fffde6
    style sg_n_6e53e16369 fill:#fffde6
    style sg_n_8b2136e3fe fill:#e6f3ff
    style sg_n_91d5d6dc7c fill:#e6ffe6
    style sg_n_f8d5afb7cd fill:#e6ffe6
```

</details>

### Run Your Method

**Via CLI:**

```bash
pipelex run bundle cv_batch_screening.mthds --inputs inputs.json
```

Create an `inputs.json` file with your PDF URLs:

```json
{
  "cvs": {
    "concept": "native.Document",
    "content": [
      { "url": "https://pipelex-web.s3.amazonaws.com/demo/John-Doe-CV.pdf" },
      { "path": "inputs/Jane-Smith-CV.pdf" }
    ]
  },
  "job_offer_pdf": {
    "concept": "native.Document",
    "content": {
      "url": "https://pipelex-web.s3.amazonaws.com/demo/Job-Offer.pdf"
    }
  }
}
```
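Since the inputs file is plain JSON, it can be sanity-checked with the stdlib before a run. A quick sketch using the same file shown above (Pipelex itself performs stricter validation against the declared concepts):

```python
import json

# The inputs.json shown above, checked before handing it to `pipelex run`.
raw = """
{
  "cvs": {
    "concept": "native.Document",
    "content": [
      { "url": "https://pipelex-web.s3.amazonaws.com/demo/John-Doe-CV.pdf" },
      { "path": "inputs/Jane-Smith-CV.pdf" }
    ]
  },
  "job_offer_pdf": {
    "concept": "native.Document",
    "content": { "url": "https://pipelex-web.s3.amazonaws.com/demo/Job-Offer.pdf" }
  }
}
"""

inputs = json.loads(raw)
for name, stuff in inputs.items():
    # Every input carries a concept name plus content (a list for batched inputs).
    assert "concept" in stuff and "content" in stuff, name
print(sorted(inputs))  # ['cvs', 'job_offer_pdf']
```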

**Via Python:**

```python
import asyncio
from pipelex.core.stuffs.document_content import DocumentContent
from pipelex.pipelex import Pipelex
from pipelex.pipeline.runner import PipelexRunner
# Generated by: `pipelex build structures bundle cv_batch_screening.mthds`
from structures.cv_batch_screening__candidate_match import CandidateMatch

async def run_pipeline() -> list[CandidateMatch]:
    runner = PipelexRunner()
    response = await runner.execute_pipeline(
        pipe_code="batch_analyze_cvs_for_job_offer",
        inputs={
            "cvs": {
                "concept": "native.Document",
                "content": [
                    DocumentContent(url="https://pipelex-web.s3.amazonaws.com/demo/John-Doe-CV.pdf"),
                    DocumentContent(path="inputs/Jane-Smith-CV.pdf"),
                ],
            },
            "job_offer_pdf": {
                "concept": "native.Document",
                "content": DocumentContent(url="https://pipelex-web.s3.amazonaws.com/demo/Job-Offer.pdf"),
            },
        },
    )
    pipe_output = response.pipe_output
    print(pipe_output)
    return pipe_output.main_stuff_as_items(item_type=CandidateMatch)

Pipelex.make()
asyncio.run(run_pipeline())
```

</details>


## See Pipelex in Action

**Claude Code builds your AI Method**

<a href="https://go.pipelex.com/demo">
  <img src="https://go.pipelex.com/demo-thumbnail" alt="Pipelex Demo" width="500" style="max-width: 100%; height: auto;">
</a>

## IDE Extension

We **highly** recommend installing our extension for `.mthds` syntax highlighting in your IDE:

- **VS Code**: Install from the [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=pipelex.pipelex)
- **Cursor, Windsurf, and other VS Code forks**: Install from the [Open VSX Registry](https://open-vsx.org/extension/Pipelex/pipelex), or search for "Pipelex" directly in your extensions tab

Running `pipelex init` will also offer to install the extension automatically if it detects your IDE.

## Run Anywhere

The same `.mthds` file runs from multiple execution targets:

| Target | How |
|--------|-----|
| **CLI** | `pipelex run bundle method.mthds --inputs inputs.json` |
| **Python** | `PipelexRunner().execute_pipeline(...)` |
| **REST API** | Self-hosted API server |
| **MCP** | Model Context Protocol — agents call methods as tools |
| **n8n** | Pipelex node for workflow automation |


# The MTHDS Ecosystem

| | Description | Link |
|---|---|---|
| **MTHDS Standard** | The open standard specification — language, package system, and typed concepts | [mthds.ai](https://mthds.ai/latest/) |
| **MTHDS Hub** | Discover and share methods — browse packages, search by signature | [mthds.sh](https://mthds.sh) |
| **MTHDS Plugins** | Claude Code plugin — commands to build, run, edit, check, fix, and publish methods | [github.com/mthds-ai/mthds-plugins](https://github.com/mthds-ai/mthds-plugins) |
| **Package System** | Versioned dependencies, lock files with SHA-256 integrity, cross-package references via `->` | [Packages docs](https://mthds.ai/latest/packages/structure/) |
| **Know-How Graph** | Typed discovery — "I have X, I need Y" — find methods or chains by typed signature | [Know-How Graph](https://mthds.ai/latest/know-how-graph/) |

<details>
<summary><b>View MTHDS skills</b></summary>

| Command | Description |
|---------|-------------|
| `/mthds-build` | Build new AI method bundles from scratch |
| `/mthds-run` | Execute methods and interpret their JSON output |
| `/mthds-edit` | Modify existing methods — change pipes, update prompts, add steps |
| `/mthds-check` | Validate bundles for issues (read-only) |
| `/mthds-fix` | Auto-fix validation errors |
| `/mthds-explain` | Walk through execution flow in plain language |
| `/mthds-inputs` | Prepare inputs: templates, synthetic data, user files |
| `/mthds-install` | Install method packages from GitHub or local dirs |
| `/mthds-pkg` | Package management — init, deps, lock, install, update |
| `/mthds-publish` | Publish methods to the hub |
| `/mthds-share` | Share methods on social media |

</details>


## Examples & Cookbook

Explore real-world examples in our **Cookbook** repository:

[![GitHub](https://img.shields.io/badge/Cookbook-5a0dad?logo=github&logoColor=white&style=flat)](https://github.com/Pipelex/pipelex-cookbook/tree/main)

Clone it, fork it, and experiment with production-ready methods for various use cases.

## Optional Features

The package supports the following additional features:

- `anthropic`: Anthropic/Claude support for text generation
- `google`: Google models (Vertex) support for text generation
- `google-genai`: Google Gemini support via the Google GenAI SDK
- `mistralai`: Mistral AI support for text generation and OCR
- `bedrock`: Amazon Bedrock support for text generation
- `fal`: Image generation via the fal.ai service
- `linkup`: Web search with Linkup
- `docling`: OCR with Docling

Install all extras:

```bash
uv pip install "pipelex[anthropic,google,google-genai,mistralai,bedrock,fal,linkup,docling]"
```

---

**Privacy & Telemetry** — Pipelex Gateway collects only technical data (model names, token counts, latency) — never prompts or business data. If you want to avoid Gateway telemetry, disable `pipelex_gateway` and use your own provider keys or local AI instead. [Learn more](https://docs.pipelex.com/latest/setup/telemetry/)

**Contributing** — We welcome contributions! See our [Contributing Guidelines](CONTRIBUTING.md).

**Community** — [![Discord](https://img.shields.io/badge/Discord-5865F2?logo=discord&logoColor=white)](https://go.pipelex.com/discord) [GitHub Issues](https://github.com/Pipelex/pipelex/issues) · [Discussions](https://github.com/Pipelex/pipelex/discussions) · [Documentation](https://docs.pipelex.com/)

## License

This project is licensed under the [MIT license](LICENSE). Runtime dependencies are distributed under their own licenses via PyPI.

---

"Pipelex" is a trademark of Evotis S.A.S.

© 2025-2026 Evotis S.A.S.
