Metadata-Version: 2.4
Name: cortexhub
Version: 0.2.9
Summary: CortexHub Python SDK - Runtime governance layer for AI Agents
Project-URL: Homepage, https://cortexhub.ai
Project-URL: Documentation, https://docs.cortexhub.ai
Project-URL: Examples, https://github.com/CortexHub-AI/examples/
Author-email: CortexHub <hello@cortexhub.ai>
License: MIT
License-File: LICENSE
Keywords: agents,ai,authorization,cedar,governance,policy
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: <3.13,>=3.10
Requires-Dist: cedarpy>=4.0.0
Requires-Dist: cryptography>=43.0.0
Requires-Dist: detect-secrets>=1.5.0
Requires-Dist: httpx>=0.28.0
Requires-Dist: opentelemetry-api>=1.20.0
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20.0
Requires-Dist: opentelemetry-sdk>=1.20.0
Requires-Dist: pip>=23.0
Requires-Dist: presidio-analyzer>=2.2.360
Requires-Dist: presidio-anonymizer>=2.2.360
Requires-Dist: pydantic>=2.9.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: spacy-lookups-data>=1.0.0
Requires-Dist: spacy>=3.8.0
Requires-Dist: structlog>=24.4.0
Provides-Extra: all
Requires-Dist: anthropic>=0.40.0; extra == 'all'
Requires-Dist: claude-agent-sdk>=0.0.1; extra == 'all'
Requires-Dist: crewai<1.0.0,>=0.50.0; extra == 'all'
Requires-Dist: langchain-core>=0.2.0; extra == 'all'
Requires-Dist: langchain-openai>=0.1.0; extra == 'all'
Requires-Dist: langgraph>=0.2.0; extra == 'all'
Requires-Dist: litellm>=1.81.5; extra == 'all'
Requires-Dist: openai-agents>=0.0.3; extra == 'all'
Provides-Extra: claude-agents
Requires-Dist: anthropic>=0.40.0; extra == 'claude-agents'
Requires-Dist: claude-agent-sdk>=0.0.1; extra == 'claude-agents'
Provides-Extra: crewai
Requires-Dist: crewai<1.0.0,>=0.50.0; extra == 'crewai'
Requires-Dist: litellm>=1.81.5; extra == 'crewai'
Provides-Extra: dev
Requires-Dist: mypy>=1.10.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest-cov>=5.0.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.4.0; extra == 'dev'
Provides-Extra: langgraph
Requires-Dist: langchain-core>=0.2.0; extra == 'langgraph'
Requires-Dist: langchain-openai>=0.1.0; extra == 'langgraph'
Requires-Dist: langgraph>=0.2.0; extra == 'langgraph'
Provides-Extra: openai-agents
Requires-Dist: openai-agents>=0.0.3; extra == 'openai-agents'
Description-Content-Type: text/markdown

# CortexHub Python SDK

**Runtime Governance for AI Agents** - Policy enforcement, PII/secrets detection, and complete audit trails with OpenTelemetry.

## Installation

```bash
# Core SDK
pip install cortexhub

# With framework support (choose one or more)
pip install cortexhub[langgraph]      # LangGraph
pip install cortexhub[crewai]         # CrewAI
pip install cortexhub[openai-agents]  # OpenAI Agents SDK
pip install cortexhub[claude-agents]  # Claude Agent SDK

# All frameworks (for development)
pip install cortexhub[all]
```

Python support: 3.10–3.12. Python 3.13 is not yet supported.

## Quick Start

```python
from cortexhub import init, Framework

# Initialize CortexHub FIRST, before importing your framework
cortex = init(
    agent_id="customer_support_agent",
    framework=Framework.LANGGRAPH,  # or CREWAI, OPENAI_AGENTS, CLAUDE_AGENTS
    enable_mcp=True,  # default; disable if you don't use MCP
)

# Now import and use your framework
from langgraph.prebuilt import create_react_agent

# Continue with your LangGraph setup...
```

## Supported Frameworks

| Framework | Enum Value | Install |
|-----------|------------|---------|
| LangGraph | `Framework.LANGGRAPH` | `pip install cortexhub[langgraph]` |
| CrewAI | `Framework.CREWAI` | `pip install cortexhub[crewai]` |
| OpenAI Agents | `Framework.OPENAI_AGENTS` | `pip install cortexhub[openai-agents]` |
| Claude Agents | `Framework.CLAUDE_AGENTS` | `pip install cortexhub[claude-agents]` |

## Tracing Coverage

All frameworks emit `run.started` and `run.completed`/`run.failed` for each run.
Tool spans (`tool.invoke`) and model spans (`llm.call`) vary by SDK:

- **LangGraph**: tool calls via `BaseTool.invoke`, LLM calls via `BaseChatModel.invoke/ainvoke`
- **CrewAI**: tool calls via `CrewStructuredTool.invoke`/`BaseTool.run`, LLM calls via LiteLLM and `BaseLLM.call/acall`
- **OpenAI Agents**: tool calls via `function_tool`, LLM calls via `OpenAIResponsesModel` and `OpenAIChatCompletionsModel`
- **Claude Agents**: tool calls via `@tool` and built-in tool hooks; LLM calls run inside the Claude Code CLI and are not intercepted by the Python SDK

## Configuration

```bash
# Required: API key
export CORTEXHUB_API_KEY=ch_live_...
```

## Features

- **Policy Enforcement** - Cloud configuration, local evaluation
- **Decision Signing** - Ed25519 cryptographic signature on every governance decision; independently verifiable by anyone with the public key — no database access required
- **PII Detection** - 50+ entity types (full coverage on first run)
- **Secrets Detection** - 30+ secret types
- **Configurable Guardrails** - Select specific PII/secret types to redact
- **Custom Patterns** - Add company-specific regex patterns
- **OpenTelemetry** - Industry-standard observability
- **Framework Adapters** - Automatic interception for all major frameworks
- **MCP Interception** - Governs MCP tool calls without framework-specific hooks
- **Privacy Mode** - Metadata-only by default, safe for production
- **Offline Policy Cache** - Enforce last synced policies without backend connectivity

## Privacy Modes

```python
# Production (default) - only metadata sent
cortex = init(agent_id="...", framework=..., privacy=True)
# Sends: tool names, arg schemas, PII types detected
# Never: raw values, prompts, responses

# Development - full data for testing policies  
cortex = init(agent_id="...", framework=..., privacy=False)
# Also sends: raw args, results, prompts (for policy testing)
```
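
The metadata-only idea can be sketched in a few lines. This is illustrative only: the SDK's actual event schema and field names are not documented here, so `to_metadata` and its output shape are assumptions for demonstration.

```python
# Illustrative sketch of metadata-only reporting (privacy=True).
# The real payload schema is an assumption, not the SDK's documented format.
def to_metadata(tool_name: str, args: dict) -> dict:
    return {
        "tool": tool_name,
        # Only the shape of the arguments is reported...
        "arg_schema": {k: type(v).__name__ for k, v in args.items()},
        # ...raw values, prompts, and responses are deliberately omitted.
    }

event = to_metadata("send_email", {"to": "john@email.com", "retries": 3})
print(event)  # {'tool': 'send_email', 'arg_schema': {'to': 'str', 'retries': 'int'}}
```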

## MCP Interception

If your agent uses MCP servers, MCP interception is enabled by default:

```python
import cortexhub

cortex = cortexhub.init(
    agent_id="my-agent",
    framework=cortexhub.Framework.LANGGRAPH,
    enable_mcp=True,  # default
)
```

To enable MCP interception without a framework adapter:

```python
cortex = cortexhub.CortexHub(api_key="...")
cortex.enable_mcp()
```

## Offline Policy Cache

Persist policies locally to keep enforcement running if the backend is unreachable:

```bash
export CORTEXHUB_ALLOW_OFFLINE_ENFORCEMENT=true
export CORTEXHUB_POLICY_DIR="$HOME/.cortexhub/policies"
```

When enabled, the SDK loads the most recent policy bundle from disk if it cannot reach
the backend during initialization.

## Handling Governance Outcomes

Policies are created in the CortexHub dashboard. The SDK fetches and enforces them automatically. Wrap your agent's run call in a try/except to handle each outcome:

```python
import cortexhub

cortex = cortexhub.init("my-agent", cortexhub.Framework.LANGGRAPH)

# Your agent code is unchanged. The SDK intercepts tool calls transparently.
try:
    result = workflow.invoke(state, config)

except cortexhub.PolicyViolationError as e:
    # A policy explicitly denied a tool call.
    print(f"Blocked: {e.reasoning}")

except cortexhub.ApprovalRequiredError as e:
    # A tool requires human approval before it runs.
    # The SDK polls the control plane and resumes automatically when approved.
    # Note: `await` requires this handler to run inside an async function.
    result = await cortex.wait_for_approval_and_resume(e, workflow, config)

except cortexhub.ApprovalDeniedError as e:
    # A reviewer denied the request.
    print(f"Denied: {e.reason}")

except cortexhub.ThrottleError as e:
    # A rate-limit policy was triggered.
    print(f"Rate limited: {e.reasoning}")

except cortexhub.CircuitBreakError as e:
    # A circuit breaker opened (cost spike, anomalous volume, etc.).
    print(f"Circuit breaker: {e.reasoning}")
```

### How `wait_for_approval_and_resume` works

1. Polls the CortexHub control plane every few seconds until a decision is made.
2. When approved: marks the approval internally and calls `workflow.invoke(None, config)`.
   For LangGraph, the SDK uses `interrupt()` to checkpoint at the tool call node — the
   graph resumes with the exact same args, so no LLM re-run occurs and the approval
   is auto-detected. No call to `mark_approval_granted()` is needed.
3. If denied/expired: raises `ApprovalDeniedError`.
4. If the default timeout (300s) is exceeded with no decision: re-raises
   `ApprovalRequiredError` with the same `approval_id` so you can surface it to the user.

```python
# Optional: configure timeout
result = await cortex.wait_for_approval_and_resume(
    e, workflow, config,
    timeout=120,       # seconds to wait (default 300)
    poll_interval=3,   # seconds between polls (default 3)
)
```

### Per-framework patterns

**LangGraph** — `interrupt()` preserves state at the exact tool call; `invoke(None, config)`
resumes with the same args, auto-approved:

```python
# async
except cortexhub.ApprovalRequiredError as e:
    result = await cortex.wait_for_approval_and_resume(e, workflow, config)
```

**CrewAI** — sync framework; use the blocking `wait_for_approval()` helper, then retry:

```python
# sync
except cortexhub.ApprovalRequiredError as e:
    cortex.wait_for_approval(e)          # blocks until approved (or denied/timeout)
    result = crew.kickoff(inputs=inputs) # retry — same tool call auto-approved
```

**OpenAI Agents SDK** — async; wait for approval, then retry:

```python
# async
except cortexhub.ApprovalRequiredError as e:
    await cortex.wait_for_approval_and_resume(e)   # no workflow arg — just waits
    result = await Runner.run(agent, messages)      # retry
```

**Claude Agent SDK** — async; same pattern:

```python
# async
except cortexhub.ApprovalRequiredError as e:
    await cortex.wait_for_approval_and_resume(e)
    async for message in claude_agent_sdk.query(prompt, tools=tools):
        ...  # retry
```

**MCP** — async; retry the specific tool call:

```python
# async
except cortexhub.ApprovalRequiredError as e:
    await cortex.wait_for_approval_and_resume(e)
    result = await session.call_tool(tool_name, arguments)  # retry
```

### Why retrying works (for non-LangGraph frameworks)

When the same tool is called again with the same args, the SDK computes the same
`context_hash`. Because the approval was tracked in `_pending_approvals`, the SDK
automatically re-checks the backend status on the retry call — if approved, the
tool is allowed without creating a new approval record. No manual
`mark_approval_granted()` needed.
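
The deduplication hinges on the hash being deterministic for a given tool name and argument set. A minimal sketch, assuming a canonical-JSON approach (the SDK's actual `context_hash` algorithm may differ):

```python
import hashlib
import json

# Illustrative only: the SDK's real context_hash algorithm is not documented
# here. This shows why an identical tool call maps to an identical key.
def context_hash(tool_name: str, args: dict) -> str:
    # Sorted keys and fixed separators make the serialization canonical,
    # so argument ordering in the dict does not change the hash.
    canonical = json.dumps(
        {"tool": tool_name, "args": args},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

h1 = context_hash("send_email", {"to": "a@b.com", "subject": "hi"})
h2 = context_hash("send_email", {"subject": "hi", "to": "a@b.com"})
assert h1 == h2  # same call, same hash, so the pending approval is matched
```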

## Guardrail Configuration

Guardrails control what happens **after detection**. On first run, the SDK detects
all supported PII types. In the dashboard, you choose which detected
types to act on (redact/block/allow) for that agent.

Configure in the dashboard:

1. **Select types to act on**: Choose specific PII types (email, phone, etc.)
2. **Add custom patterns**: Regex for company-specific data (employee IDs, etc.)
3. **Choose action**: Redact, block, or monitor only

The SDK applies your configuration automatically for subsequent runs:

```python
# With guardrail policy active:
# Input prompt: "Contact john@email.com about employee EMP-123456"
# After redaction: "Contact [REDACTED-EMAIL_ADDRESS] about employee [REDACTED-CUSTOM_EMPLOYEE_ID]"
# Only configured types are redacted
```
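
To make the redaction format concrete, here is a minimal regex-based redactor that reproduces the example above. It is illustrative only: the SDK's detectors are Presidio-based and far more robust, and the patterns below are assumptions for demonstration.

```python
import re

# Illustrative patterns only -- the real detectors cover 50+ PII types.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CUSTOM_EMPLOYEE_ID": re.compile(r"EMP-\d{6}"),
}

def redact(text: str, enabled: set[str]) -> str:
    # Only types enabled in the guardrail config are acted on.
    for name, pattern in PATTERNS.items():
        if name in enabled:
            text = pattern.sub(f"[REDACTED-{name}]", text)
    return text

msg = "Contact john@email.com about employee EMP-123456"
print(redact(msg, {"EMAIL_ADDRESS", "CUSTOM_EMPLOYEE_ID"}))
# Contact [REDACTED-EMAIL_ADDRESS] about employee [REDACTED-CUSTOM_EMPLOYEE_ID]
```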

## Important: Initialization Order

**Always initialize CortexHub FIRST**, before importing your framework:

```python
# ✅ CORRECT
from cortexhub import init, Framework
cortex = init(agent_id="my_agent", framework=Framework.LANGGRAPH)

from langgraph.prebuilt import create_react_agent  # Import AFTER init

# ❌ WRONG
from langgraph.prebuilt import create_react_agent  # Framework imported first
from cortexhub import init, Framework
cortex = init(...)  # Too late!
```

This ensures:
1. CortexHub sets up OpenTelemetry before frameworks that also use it
2. Framework decorators/classes are properly wrapped
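
The second point can be demonstrated with a toy module: a symbol bound before patching escapes the wrapping, which is exactly why `init()` must run before the framework import. Everything here (`toy_framework`, `create_agent`) is hypothetical.

```python
import types

# A stand-in for a framework module.
framework = types.ModuleType("toy_framework")
framework.create_agent = lambda: "unwrapped"

# Simulates `from toy_framework import create_agent` happening too early:
create_agent = framework.create_agent

# Simulates init() wrapping the framework attribute afterwards:
framework.create_agent = lambda: "governed"

print(create_agent())            # unwrapped -- the early binding escaped wrapping
print(framework.create_agent())  # governed
```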

## Architecture

```
Agent Decides → [CortexHub] → Agent Executes
                    │
              ┌─────┴─────┐
              │           │
         Policy      Guardrails
         Engine      (PII/Secrets)
              │           │
              └─────┬─────┘
                    │
            Decision Signing
            (Ed25519, per-span)
            Signed in your env
            before leaving it
                    │
              OpenTelemetry
               (to backend)
```

Every governance decision is signed **inside your environment**, before the span reaches CortexHub. The private key never leaves your process. The public key is registered with the backend and available at a public endpoint — so any auditor can independently verify any decision without database access.
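
The sign-locally/verify-anywhere property comes directly from Ed25519. A minimal sketch using the `cryptography` package (a core dependency); the key handling and decision payload shown are illustrative assumptions, not the SDK's internal format:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays inside your process
public_key = private_key.public_key()       # registered with the backend

# A hypothetical decision payload, signed before it leaves your environment.
decision = b'{"tool":"send_email","effect":"allow"}'
signature = private_key.sign(decision)

# Any auditor holding only the public key can verify the decision:
public_key.verify(signature, decision)  # no exception -> signature valid

# A tampered payload is rejected:
try:
    public_key.verify(signature, b'{"tool":"send_email","effect":"deny"}')
except InvalidSignature:
    print("tampered decision rejected")
```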

## Development

```bash
cd python

# Install with all frameworks
uv sync --all-extras

# Run tests
uv run pytest

# Lint
uv run ruff check .
```

## Links

- [Documentation](https://docs.cortexhub.ai)
- [Dashboard](https://app.cortexhub.ai)
- [Examples](https://github.com/CortexHub-AI/examples/)

## License

MIT
