Metadata-Version: 2.4
Name: agentgovern
Version: 0.1.0b7
Summary: Compliance-as-code middleware for agentic AI workflows.
Project-URL: Homepage, https://agentgovern.zirahn.com
Project-URL: Repository, https://github.com/ahmedkhan-zirahn/agentgovern
Project-URL: Issues, https://github.com/ahmedkhan-zirahn/agentgovern/issues
Author-email: Ahmed Khan <ahmed.khan@zirahn.com>
License: MIT
License-File: LICENSE
Keywords: agent-governance,agents,ai,compliance,eu-ai-act,governance,langchain,nist
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.9
Requires-Dist: httpx>=0.27.0
Requires-Dist: pydantic>=2.10.0
Requires-Dist: python-dotenv>=1.0.0
Provides-Extra: crewai
Requires-Dist: crewai>=0.80.0; extra == 'crewai'
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest-cov>=6.0.0; extra == 'dev'
Requires-Dist: pytest>=8.3.0; extra == 'dev'
Requires-Dist: respx>=0.21.0; extra == 'dev'
Requires-Dist: ruff>=0.8.0; extra == 'dev'
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.3.0; extra == 'langchain'
Provides-Extra: openai
Requires-Dist: openai>=1.50.0; extra == 'openai'
Description-Content-Type: text/markdown

# AgentGovern Python SDK

**Compliance-as-code for agentic AI workflows.**

> **Beta** — API may change before 1.0. [Report issues](https://github.com/ahmedkhan-zirahn/agentgovern/issues).

AgentGovern intercepts AI agent actions, evaluates them against configurable compliance policies (EU AI Act, NIST AI RMF, ISO 42001), and generates audit-ready evidence in real time. This SDK instruments your LangChain, CrewAI, or OpenAI Agents code with minimal changes.

## Install

```bash
pip install agentgovern
```

## Quickstart — LangChain

```python
import agentgovern
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI

# 1. Initialize once at startup
agentgovern.init(
    api_key="ag_prod_...",           # from https://agentgovern.zirahn.com/settings/api-keys
    base_url="https://agentgovern.zirahn.com",
    environment="development",       # "production" | "staging" | "development"
)

# 2. Register your agent
agentgovern.register_agent(
    external_id="credit-scoring-v2",
    name="Credit Scoring Agent v2",
    framework="langchain",
)

# 3. Get the callback handler — binds to credit-scoring-v2 automatically
handler = agentgovern.instrument_langchain()

# 4. Pass it to your AgentExecutor — no other changes needed
llm = ChatOpenAI(model="gpt-4o")
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, callbacks=[handler])

result = executor.invoke({"input": "Evaluate loan application for customer #12345"})
```

`instrument_langchain()` binds to the most recently registered agent. Every tool call,
LLM invocation, and agent step is captured, evaluated against your compliance policies,
and visible in the dashboard.

## Enforcement modes (current limitations)

AgentGovern supports four enforcement modes per policy:

| Mode     | Behavior (v0.1)                                   |
|----------|---------------------------------------------------|
| warn     | Logs violation, agent continues                   |
| log      | Silently logs violation, agent continues          |
| disabled | Policy not evaluated                              |
| enforce  | Logs violation, agent **should** halt — see below |

### Known limitation: enforce mode with LangChain (v0.1)

LangChain's callback machinery catches exceptions raised from callback handlers
and logs them as warnings rather than propagating them to halt the agent chain.
This means that when Gate 1 returns `action_taken='block'` for an enforce-mode
rule, our SDK correctly raises `PolicyViolation`, but LangChain swallows the
exception and the agent continues executing.

**Today:** enforce-mode rules log the violation to `input_evaluations` with full
regulatory citation. The audit trail is complete, but the agent chain does not halt.

**v0.2 (Q3 2026):** We are releasing a `ChatModel` wrapper that invokes Gate 1
before the LLM call (not as a callback), enabling real hard-block behavior.

**Workaround:** For customers who need hard-block today, call
`agentgovern.evaluate_input()` directly before invoking your agent, and check
`result.action_taken == 'block'` yourself:

```python
# Assumes a FastAPI/Starlette context; any error-signaling mechanism works here.
from fastapi import HTTPException

result = agentgovern.evaluate_input(agent_external_id="my-agent", prompt=user_prompt)
if result.action_taken == "block":
    raise HTTPException(status_code=403, detail="Prompt blocked by compliance policy")
agent.invoke({"input": user_prompt})
```

## Multiple agents in one process

If you run more than one agent in the same process, pass the agent ID explicitly to
avoid ambiguity:

```python
agentgovern.register_agent(external_id="fraud-detector", name="Fraud Detector")
agentgovern.register_agent(external_id="kyc-agent", name="KYC Agent")

handler_fraud = agentgovern.instrument_langchain("fraud-detector")
handler_kyc   = agentgovern.instrument_langchain("kyc-agent")

fraud_executor = AgentExecutor(agent=..., tools=..., callbacks=[handler_fraud])
kyc_executor   = AgentExecutor(agent=..., tools=..., callbacks=[handler_kyc])
```

## Manual instrumentation (all frameworks)

```python
from agentgovern.types import ActionType, ActionStatus

agentgovern.track_action(
    agent_external_id="my-agent-id",
    action_type=ActionType.TOOL_CALL,
    action_name="fetch_credit_bureau_data",
    status=ActionStatus.COMPLETED,
    duration_ms=312,
    input_payload={"bureau": "experian", "customer_id": "..."},
    output_payload={"fico_score": 720},
)
```
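When instrumenting manually, measuring `duration_ms` accurately is up to you. A small timing wrapper keeps that honest — a sketch, where `timed_tool_call` is an illustrative helper of ours, not part of the SDK:

```python
import time

def timed_tool_call(fn, *args, **kwargs):
    """Run a tool function and return (result, duration_ms) for track_action()."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    duration_ms = int((time.perf_counter() - start) * 1000)
    return result, duration_ms

# Hypothetical tool function standing in for a real credit-bureau lookup:
result, ms = timed_tool_call(lambda: {"fico_score": 720})
# Then pass ms as duration_ms and result as output_payload to track_action().
```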

## Supported frameworks

| Framework | Auto-instrumentation | Status |
|-----------|---------------------|--------|
| LangChain | `instrument_langchain()` — wraps tool and LLM callbacks | Stable |
| CrewAI | Manual via `track_action()` | Beta |
| OpenAI Agents API | Manual via `track_action()` | Beta |

Auto-instrumentation for CrewAI and OpenAI Agents is on the roadmap.

## Compliance frameworks

| Framework | Status |
|-----------|--------|
| EU AI Act (High-Risk Systems) | Available |
| NIST AI RMF | Coming soon |
| ISO 42001 | Coming soon |

Enable policy packs from the [AgentGovern dashboard](https://agentgovern.zirahn.com).

## Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| `api_key` | required | SDK ingest key from the dashboard |
| `base_url` | `https://agentgovern.zirahn.com` | API endpoint |
| `environment` | `"production"` | `"production"` \| `"staging"` \| `"development"` |
| `fail_silently` | `True` | If `True`, SDK errors never raise into your agent |

## Design guarantees

- `track_action()` returns in **< 5 ms** — all network I/O happens asynchronously on a background thread
- Buffer cap: 10,000 actions; oldest dropped when full
- Retry: 3 attempts with exponential backoff (1 s → 30 s max)
- If AgentGovern is unreachable, your agent continues unaffected
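The buffering and retry semantics above can be pictured with two standard building blocks — a sketch of the behavior, not the SDK's internal code:

```python
from collections import deque

# Drop-oldest buffer: once 10,000 actions are queued, appending evicts the oldest.
buffer = deque(maxlen=10_000)
for i in range(10_001):
    buffer.append(i)
# buffer now holds actions 1..10_000; action 0 was dropped

def backoff_delays(attempts=3, base=1.0, cap=30.0):
    """Exponential backoff schedule: 1 s, 2 s, 4 s, ... capped at 30 s."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

print(backoff_delays())  # [1.0, 2.0, 4.0]
```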

## Links

- **Dashboard:** https://agentgovern.zirahn.com
- **Documentation:** https://github.com/ahmedkhan-zirahn/agentgovern
- **Issues:** https://github.com/ahmedkhan-zirahn/agentgovern/issues

## License

MIT — Copyright (c) 2026 Zirahn
