Metadata-Version: 2.4
Name: definable
Version: 0.5.0
Summary: Production-grade AI agent framework with RAG, memory, tools, and multi-model support
License-Expression: Apache-2.0
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiosqlite>=0.20.0
Requires-Dist: docstring-parser>=0.17.0
Requires-Dist: httpx>=0.28.1
Requires-Dist: mypy>=1.19.1
Requires-Dist: openai>=2.15.0
Requires-Dist: packaging>=23.0
Requires-Dist: pydantic>=2.12.5
Requires-Dist: prompt-toolkit>=3.0.0
Requires-Dist: rich>=14.2.0
Requires-Dist: ruff>=0.15.0
Requires-Dist: tiktoken>=0.12.0
Requires-Dist: voyageai>=0.3.7
Provides-Extra: discord
Requires-Dist: discord.py>=2.3.0; extra == "discord"
Provides-Extra: slack
Requires-Dist: slack-bolt>=1.21.0; extra == "slack"
Requires-Dist: aiohttp>=3.9.0; extra == "slack"
Provides-Extra: signal
Provides-Extra: interfaces
Requires-Dist: discord.py>=2.3.0; extra == "interfaces"
Requires-Dist: slack-bolt>=1.21.0; extra == "interfaces"
Requires-Dist: aiohttp>=3.9.0; extra == "interfaces"
Provides-Extra: readers
Requires-Dist: pypdf>=4.0.0; extra == "readers"
Requires-Dist: python-docx>=1.0.0; extra == "readers"
Requires-Dist: python-pptx>=1.0.0; extra == "readers"
Requires-Dist: openpyxl>=3.1.0; extra == "readers"
Requires-Dist: odfpy>=1.4.0; extra == "readers"
Requires-Dist: striprtf>=0.0.26; extra == "readers"
Provides-Extra: serve
Requires-Dist: fastapi>=0.115.0; extra == "serve"
Requires-Dist: uvicorn[standard]>=0.34.0; extra == "serve"
Provides-Extra: cron
Requires-Dist: croniter>=2.0.0; extra == "cron"
Provides-Extra: runtime
Requires-Dist: fastapi>=0.115.0; extra == "runtime"
Requires-Dist: uvicorn[standard]>=0.34.0; extra == "runtime"
Requires-Dist: croniter>=2.0.0; extra == "runtime"
Provides-Extra: jwt
Requires-Dist: pyjwt>=2.8.0; extra == "jwt"
Provides-Extra: postgres-memory
Requires-Dist: asyncpg>=0.29.0; extra == "postgres-memory"
Provides-Extra: redis-memory
Requires-Dist: redis>=5.0.0; extra == "redis-memory"
Provides-Extra: qdrant-memory
Requires-Dist: qdrant-client>=1.9.0; extra == "qdrant-memory"
Provides-Extra: chroma-memory
Requires-Dist: chromadb>=0.5.0; extra == "chroma-memory"
Provides-Extra: mongodb-memory
Requires-Dist: motor>=3.3.0; extra == "mongodb-memory"
Provides-Extra: pinecone-memory
Requires-Dist: pinecone>=5.0.0; extra == "pinecone-memory"
Provides-Extra: mem0-memory
Requires-Dist: mem0ai>=0.1.0; extra == "mem0-memory"
Provides-Extra: mistral-ocr
Requires-Dist: mistralai>=1.0.0; extra == "mistral-ocr"
Provides-Extra: mistral-ocr-images
Requires-Dist: mistralai>=1.0.0; extra == "mistral-ocr-images"
Requires-Dist: Pillow>=10.0.0; extra == "mistral-ocr-images"
Provides-Extra: research
Requires-Dist: ddgs>=9.0.0; extra == "research"
Requires-Dist: curl-cffi>=0.7.0; extra == "research"
Provides-Extra: qdrant
Requires-Dist: qdrant-client>=1.9.0; extra == "qdrant"
Provides-Extra: chroma
Requires-Dist: chromadb>=0.5.0; extra == "chroma"
Provides-Extra: pinecone
Requires-Dist: pinecone>=5.0.0; extra == "pinecone"
Provides-Extra: pgvector
Requires-Dist: psycopg[binary]>=3.1.0; extra == "pgvector"
Requires-Dist: pgvector>=0.3.0; extra == "pgvector"
Provides-Extra: mongodb
Requires-Dist: pymongo>=4.0.0; extra == "mongodb"
Provides-Extra: redis
Requires-Dist: redis>=5.0.0; extra == "redis"
Requires-Dist: redisvl>=0.3.0; extra == "redis"
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.40.0; extra == "anthropic"
Provides-Extra: google
Requires-Dist: google-genai>=1.0.0; extra == "google"
Provides-Extra: ollama
Requires-Dist: ollama>=0.4.0; extra == "ollama"
Provides-Extra: mistral
Requires-Dist: mistralai>=1.0.0; extra == "mistral"
Provides-Extra: composio
Requires-Dist: composio>=1.0.0; extra == "composio"
Provides-Extra: browser
Requires-Dist: playwright>=1.49.0; extra == "browser"
Requires-Dist: aiohttp>=3.9.0; extra == "browser"
Provides-Extra: desktop
Requires-Dist: websockets>=12.0; extra == "desktop"
Provides-Extra: cli
Requires-Dist: textual>=1.0.0; extra == "cli"
Dynamic: license-file

<div align="center">

<h1>Definable</h1>

<p><strong>Build LLM agents that work in production.</strong></p>

<p>
  <a href="https://pypi.org/project/definable/"><img src="https://img.shields.io/pypi/v/definable?color=%2334D058&label=pypi" alt="PyPI"></a>
  <a href="https://pypi.org/project/definable/"><img src="https://img.shields.io/pypi/pyversions/definable?color=%2334D058" alt="Python"></a>
  <a href="https://github.com/definableai/definable.ai/blob/main/LICENSE"><img src="https://img.shields.io/github/license/definableai/definable.ai?color=%2334D058" alt="License"></a>
  <a href="https://pypi.org/project/definable/"><img src="https://img.shields.io/pypi/dm/definable?color=%2334D058&label=downloads" alt="Downloads"></a>
  <a href="https://github.com/definableai/definable.ai/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/definableai/definable.ai/ci.yml?label=CI" alt="CI"></a>
</p>

<p>
  <a href="https://docs.definable.ai">Documentation</a> &nbsp;·&nbsp;
  <a href="https://github.com/definableai/definable.ai/tree/main/definable/examples">Examples</a> &nbsp;·&nbsp;
  <a href="https://pypi.org/project/definable/">PyPI</a>
</p>

</div>

<br>

A Python framework for building agent applications with tools, RAG, persistent memory, guardrails, skills, file readers, browser automation, messaging platform integrations, and the Model Context Protocol. Switch providers without rewriting agent code.

---

## Install

```bash
pip install definable
```

Or with [uv](https://github.com/astral-sh/uv):

```bash
uv pip install definable
```

## Quick Start

```python
from definable.agent import Agent
from definable.model.openai import OpenAIChat

agent = Agent(
  model=OpenAIChat(id="gpt-4o-mini"),
  instructions="You are a helpful assistant.",
)

output = agent.run("What is the capital of Japan?")
print(output.content)  # The capital of Japan is Tokyo.
```

Or use **string model shorthand** — no explicit import needed:

```python
from definable.agent import Agent

agent = Agent(model="gpt-4o-mini", instructions="You are a helpful assistant.")
output = agent.run("What is the capital of Japan?")
```

## Add Tools

```python
from definable.agent import Agent
from definable.tool.decorator import tool


@tool
def get_weather(city: str) -> str:
  """Get current weather for a city."""
  return f"Sunny, 72°F in {city}"


agent = Agent(
  model="gpt-4o-mini",
  tools=[get_weather],
  instructions="Help users check the weather.",
)

output = agent.run("What's the weather in Tokyo?")
```

The agent calls tools automatically. No manual function routing.
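
To make the mechanism concrete, here is a plain-Python sketch of a generic tool-dispatch step, the kind of routing the framework performs for you (names and shapes here are illustrative, not Definable's internals):

```python
import json


def get_weather(city: str) -> str:
  """Toy tool, mirroring the example above."""
  return f"Sunny, 72°F in {city}"


# Registry mapping tool names to callables, built from the tools list.
TOOLS = {"get_weather": get_weather}


def run_tool_call(call: dict) -> str:
  """Dispatch a model-requested tool call to the registered function."""
  fn = TOOLS[call["name"]]
  args = json.loads(call["arguments"])  # providers send arguments as JSON
  return fn(**args)


# A tool call as a provider typically emits it:
call = {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}
print(run_tool_call(call))  # Sunny, 72°F in Tokyo
```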

## Structured Output

```python
from pydantic import BaseModel
from definable.agent import Agent
from definable.tool.decorator import tool


@tool
def get_weather(city: str) -> str:
  """Get current weather for a city."""
  return f"Sunny, 72°F in {city}"


class WeatherReport(BaseModel):
  city: str
  temperature: float
  conditions: str


agent = Agent(model="gpt-4o-mini", tools=[get_weather])

output = agent.run("Weather in Tokyo?", output_schema=WeatherReport)
print(output.content)  # JSON string matching WeatherReport schema
```

Pass any Pydantic model to `output_schema` and get validated, typed results back.

## Streaming

```python
from definable.agent import Agent

agent = Agent(model="gpt-4o-mini", instructions="You are a helpful assistant.")

for event in agent.run_stream("Write a haiku about Python."):
  if event.content:
    print(event.content, end="", flush=True)
```

`run_stream()` yields events as they arrive — content chunks, tool calls, and completion signals.

## Multi-Turn Conversations

```python
from definable.agent import Agent

agent = Agent(model="gpt-4o-mini", instructions="You are a helpful assistant.")

output1 = agent.run("My name is Alice.")
output2 = agent.run("What's my name?", messages=output1.messages)
print(output2.content)  # "Your name is Alice."
```

Pass `messages` from a previous run to continue the conversation.

## Persistent Memory

```python
from definable.agent import Agent
from definable.memory import Memory, SQLiteStore

agent = Agent(
  model="gpt-4o-mini",
  memory=Memory(store=SQLiteStore("memory.db")),
  instructions="You are a personal assistant.",
)

# inside an async function
await agent.arun("My name is Alice and I prefer dark mode.", user_id="alice")
# Later, even in a new session...
await agent.arun("What's my name?", user_id="alice")  # Recalls "Alice"
```

Memory stores session history automatically and summarizes older messages once the count exceeds `max_messages`. For quick testing, pass `memory=True` to get an in-memory store. Three backends ship in the core package (SQLite, file-based, in-memory); additional stores such as PostgreSQL, Redis, MongoDB, Pinecone, and Mem0 are available via extras.
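
The summarization trigger can be pictured with a minimal sketch; `max_messages` is the real configuration knob, but the one-line summary below is a stand-in for the LLM-generated summary:

```python
def compact_history(messages: list[str], max_messages: int) -> list[str]:
  """Keep the newest messages; fold the overflow into one summary entry.
  Illustrative stand-in for LLM-based summarization."""
  if len(messages) <= max_messages:
    return messages
  overflow = messages[: len(messages) - max_messages + 1]
  summary = f"[summary of {len(overflow)} earlier messages]"
  return [summary] + messages[len(overflow):]


history = [f"msg {i}" for i in range(10)]
print(compact_history(history, max_messages=4))
```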

## Knowledge Base (RAG)

```python
from definable.agent import Agent
from definable.knowledge import Knowledge, Document
from definable.embedder import OpenAIEmbedder
from definable.vectordb import InMemoryVectorDB

kb = Knowledge(
  vector_db=InMemoryVectorDB(),
  embedder=OpenAIEmbedder(),
  top_k=3,
)
kb.add(Document(content="Company vacation policy: 20 days PTO per year."))

agent = Agent(
  model="gpt-4o-mini",
  instructions="You are an HR assistant.",
  knowledge=kb,
)

output = agent.run("How many vacation days do I get?")
```

The agent retrieves relevant documents before responding. Supports embedders (OpenAI, Voyage), vector DBs (in-memory, PostgreSQL, Qdrant, ChromaDB, MongoDB, Redis, Pinecone), rerankers (Cohere), and chunkers.

> **Note:** `Agent(knowledge=True)` raises `ValueError` — unlike `memory=True`, knowledge requires explicit configuration with a vector DB.

## Guardrails

```python
from definable.agent import Agent
from definable.agent.guardrail import Guardrails, max_tokens, pii_filter, tool_blocklist
from definable.tool.decorator import tool


@tool
def get_weather(city: str) -> str:
  """Get current weather for a city."""
  return f"Sunny, 72°F in {city}"


agent = Agent(
  model="gpt-4o-mini",
  instructions="You are a support agent.",
  tools=[get_weather],
  guardrails=Guardrails(
    input=[max_tokens(500)],
    output=[pii_filter()],
    tool=[tool_blocklist({"dangerous_tool"})],
  ),
)

output = agent.run("What's the weather?")
```

Guardrails check, modify, or block content at input, output, and tool-call checkpoints. Built-ins include token limits, PII redaction, topic blocking, and regex filters. Compose rules with `ALL`, `ANY`, `NOT`, and `when()`.
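
The combinators act like boolean predicates over content. A plain-Python sketch of that idea, with stand-in rules (the real built-ins and signatures may differ):

```python
from typing import Callable

Rule = Callable[[str], bool]  # True means "allow"


def ALL(*rules: Rule) -> Rule:
  return lambda text: all(r(text) for r in rules)


def ANY(*rules: Rule) -> Rule:
  return lambda text: any(r(text) for r in rules)


def NOT(rule: Rule) -> Rule:
  return lambda text: not rule(text)


# Stand-ins for built-ins like token limits and topic blocking:
short_enough = lambda t: len(t.split()) <= 100
mentions_refund = lambda t: "refund" in t.lower()

policy = ALL(short_enough, NOT(mentions_refund))
print(policy("Please check the weather."))  # True
print(policy("I demand a refund now."))     # False
```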

## Skills

```python
from definable.agent import Agent
from definable.skill import Calculator, WebSearch, DateTime

agent = Agent(
  model="gpt-4o-mini",
  skills=[Calculator(), WebSearch(), DateTime()],
  instructions="You are a helpful assistant.",
)

output = agent.run("What is 15% of 230?")
```

Skills bundle domain expertise (instructions) with tools. Built-in skills include Calculator, WebSearch, DateTime, HTTPRequests, JSONOperations, TextProcessing, Shell, FileOperations, and MacOS. Create custom skills by subclassing `Skill`.
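
Conceptually, a skill is just instructions bundled with tools. A toy sketch of that shape (hypothetical; the real `Skill` base class has its own API):

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class SkillSketch:
  """Toy stand-in for a skill: domain instructions plus tools."""
  name: str
  instructions: str
  tools: list[Callable] = field(default_factory=list)


def percent_of(pct: float, value: float) -> float:
  """The kind of tool a calculator-style skill might expose."""
  return pct * value / 100


calculator = SkillSketch(
  name="calculator",
  instructions="Use the math tools for any arithmetic.",
  tools=[percent_of],
)
print(calculator.tools[0](15, 230))  # 34.5
```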

## MCP

```python
from definable.agent import Agent
from definable.mcp import MCPConfig, MCPServerConfig, MCPToolkit

config = MCPConfig(
  servers=[
    MCPServerConfig(
      name="filesystem",
      command="npx",
      args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
  ]
)

async with MCPToolkit(config=config) as toolkit:
  agent = Agent(model="gpt-4o-mini", toolkits=[toolkit])
  await agent.arun("List files in /tmp")
```

Connect to any MCP server. Use the same tools as Claude Desktop.

## File Readers

```python
from definable.agent import Agent
from definable.media import File

agent = Agent(
  model="gpt-4o-mini",
  readers=True,
  instructions="Summarize the uploaded document.",
)

output = agent.run("Summarize this.", files=[File(filepath="report.pdf")])
```

Pass `readers=True` to enable automatic parsing. Supports PDF, DOCX, PPTX, XLSX, ODS, RTF, HTML, images, and audio. AI-powered OCR available via Mistral, OpenAI, Anthropic, and Google providers.

## Observability Dashboard

```python
from definable.agent import Agent

agent = Agent(
  model="gpt-4o-mini",
  observability=True,  # enables live dashboard at /obs/
  instructions="You are a helpful assistant.",
)

agent.serve(enable_server=True)
# Open http://localhost:8000/obs/ in your browser
```

Live events (SSE), session history, run comparison, per-tool and per-model metrics — all in a single-page dashboard. Use `ObservabilityConfig` for fine-grained control (trace dir, buffer size, theme).
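
The live event feed uses standard Server-Sent Events, so any SSE client can consume it. A minimal parser for that wire format (generic SSE; the event names and payloads below are made up, not Definable's actual schema):

```python
def parse_sse(stream: str) -> list[dict]:
  """Parse a text/event-stream body into event dicts (minimal, generic SSE)."""
  events, current = [], {}
  for line in stream.splitlines():
    if not line:  # a blank line terminates the current event
      if current:
        events.append(current)
        current = {}
    elif ":" in line:
      field, _, value = line.partition(":")
      current[field] = value.lstrip()
  return events


body = 'event: run_started\ndata: {"run_id": "abc"}\n\ndata: token\n\n'
print(parse_sse(body))
```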

## Deploy It

```python
from definable.agent import Agent
from definable.agent.trigger import Webhook, Cron
from definable.agent.auth import APIKeyAuth

agent = Agent(model="gpt-4o-mini", instructions="You are a support agent.")

agent.on(Webhook(path="/support", method="POST"))
agent.on(Cron(schedule="0 9 * * *"))  # daily at 09:00
agent.auth = APIKeyAuth(keys={"sk-my-secret-key"})
agent.serve(host="0.0.0.0", port=8000, dev=True)
```

`agent.serve()` starts an HTTP server with registered webhooks, cron triggers, and interfaces in a single process. Add `dev=True` for hot-reload during development.

## Connect to Platforms

```python
from definable.agent import Agent
from definable.agent.interface.telegram import TelegramInterface

telegram = TelegramInterface(bot_token="BOT_TOKEN")

agent = Agent(model="gpt-4o-mini", instructions="You are a Telegram bot.")
agent.serve(telegram)
```

One agent, multiple platforms. Discord, Desktop, and CLI interfaces also available.

## Thinking (Reasoning Layer)

```python
from definable.agent import Agent
from definable.agent.reasoning import Thinking

agent = Agent(
  model="gpt-4o-mini",
  thinking=Thinking(),  # or thinking=True for defaults
  instructions="Think step by step.",
)

output = await agent.arun("What is 127 * 43?")  # inside an async function
```

The thinking layer adds chain-of-thought reasoning before the final response.

## Tracing

```python
from definable.agent import Agent
from definable.agent.tracing import Tracing, JSONLExporter

agent = Agent(
  model="gpt-4o-mini",
  tracing=Tracing(exporters=[JSONLExporter("./traces")]),
  instructions="You are a helpful assistant.",
)

output = agent.run("Hello!")
# Traces saved to ./traces/{session_id}.jsonl
```

Or use `tracing=True` for default console tracing.

## Replay & Compare

```python
from definable.agent import Agent
from definable.model.openai import OpenAIChat

agent = Agent(model="gpt-4o-mini", instructions="You are a helpful assistant.")

# Inspect a past run
output = agent.run("Explain quantum computing.")
replay = agent.replay(run_output=output)
print(replay.steps)  # Each model call and tool invocation
print(replay.tokens)  # Token usage breakdown

# Re-run with a different model and compare
new_output = agent.replay(run_output=output, model=OpenAIChat(id="gpt-4o"))
comparison = agent.compare(output, new_output)
print(comparison.cost_diff)  # Cost difference between runs
print(comparison.token_diff)  # Token usage difference
```

Replay lets you inspect past runs, re-execute them with different models or instructions, and compare results side by side.

## Testing

```python
from definable.agent import Agent
from definable.agent.testing import MockModel

agent = Agent(
  model=MockModel(responses=["The capital of France is Paris."]),
  instructions="You are a geography expert.",
)

output = agent.run("What is the capital of France?")
assert "Paris" in output.content
```

`MockModel` returns canned responses — no API keys needed. Use it in unit tests to verify agent behavior deterministically.

---

## Features

| Category | Details |
|---|---|
| **Models** | OpenAI, DeepSeek, Moonshot, xAI, Anthropic, Mistral, Google Gemini, Perplexity, Ollama, OpenRouter, any OpenAI-compatible provider. String shorthand: `Agent(model="gpt-4o")` resolves automatically |
| **Agents** | Multi-turn conversations, structured output, configurable retries, max iterations |
| **Agentic Loop** | 8-phase pipeline, parallel tool calls via `asyncio.gather`, HITL pause/resume, cooperative cancellation, EventBus, hooks, ToolRetry, phase metrics |
| **Tools** | `@tool` decorator with automatic parameter extraction from type hints and docstrings |
| **Toolkits** | Composable tool groups, `KnowledgeToolkit` for explicit RAG search |
| **Skills** | Domain expertise + tools in one package; 9 built-in skills (incl. MacOS), custom `Skill` subclass |
| **Knowledge / RAG** | Embedders, vector DBs, rerankers (Cohere), chunkers, automatic retrieval |
| **Memory** | Session-history memory with auto-summarization |
| **Memory Stores** | SQLite, file-based, in-memory (core); PostgreSQL, Redis, MongoDB, Pinecone, Mem0 via extras |
| **Readers** | PDF, DOCX, PPTX, XLSX, ODS, RTF, HTML, images, audio |
| **Reader Providers** | Mistral OCR, OpenAI, Anthropic, Google (AI-powered document parsing) |
| **Guardrails** | Input/output/tool checkpoints, PII redaction, token limits, topic blocking, regex filters |
| **Guardrails Composition** | `ALL`, `ANY`, `NOT`, `when()` combinators for complex policy rules |
| **Interfaces** | Telegram, Discord, Desktop, CLI, session management, identity resolution |
| **Browser Toolkit** | 55 browser automation tools via Playwright CDP — role-based refs, AI-friendly errors, console/network diagnostics |
| **Claude Code Agent** | Zero-dep subprocess wrapper for Claude Code CLI with full Definable ecosystem integration |
| **Runtime** | `agent.serve()`, webhooks, cron triggers, event triggers, `dev=True` hot-reload |
| **Auth** | `APIKeyAuth`, `JWTAuth`, `AllowlistAuth`, `CompositeAuth`, pluggable `AuthProvider` protocol |
| **Streaming** | Real-time response and tool call streaming |
| **Replay** | Inspect past runs, re-execute with overrides, `agent.compare()` for side-by-side diffs |
| **Middleware** | Request/response transforms via `agent.use()`, logging, retry, metrics |
| **Tracing** | JSONL trace export for debugging and analysis |
| **Observability** | Live dashboard at `/obs/` with real-time events (SSE), session browser, run comparison, tool/model metrics — `Agent(observability=True)` |
| **Thinking** | Chain-of-thought reasoning layer with configurable triggers |
| **Compression** | Automatic context window management for long conversations |
| **Testing** | `MockModel`, `AgentTestCase`, `create_test_agent` utilities |
| **MCP** | Model Context Protocol client for external tool servers |
| **Pipeline** | 8-phase execution (Prepare → Recall → Think → GuardInput → Compose → InvokeLoop → GuardOutput → Store), custom phases, hooks (before/after/instead) |
| **Debug Mode** | `Agent(debug=True)`, `DebugExporter` with color-coded model call breakdown, `DebugConfig` for step-mode inspection |
| **Sub-Agents** | `SubAgentPolicy`, `spawn_agent` tool, concurrent child agents with semaphore-limited execution |
| **Cancellation** | `CancellationToken` for cooperative cancellation, `AgentCancelled` exception |
| **Types** | Full Pydantic models, `py.typed` marker, mypy verified |

## Supported Models

```python
from definable.agent import Agent
from definable.model.openai import OpenAIChat        # GPT-4o, GPT-4o-mini, o1, o3, ...
from definable.model.deepseek import DeepSeekChat    # deepseek-chat, deepseek-reasoner
from definable.model.moonshot import MoonshotChat    # moonshot-v1-8k, moonshot-v1-128k
from definable.model.xai import xAI                  # grok-3, grok-2-latest
from definable.model.anthropic import Claude         # claude-sonnet-4-20250514, claude-haiku, ...
from definable.model.mistral import MistralChat      # mistral-large-latest, mistral-small, ...
from definable.model.google import Gemini            # gemini-2.0-flash-001, gemini-1.5-pro, ...
from definable.model.ollama import Ollama            # llama3, mistral, codellama, ...
from definable.model.openrouter import OpenRouter    # any model via OpenRouter
from definable.model.perplexity import Perplexity    # pplx-70b-online, pplx-7b-chat, ...

# Or use string shorthand — no model import needed:
agent = Agent(model="gpt-4o-mini")
agent = Agent(model="anthropic/claude-sonnet-4-20250514")
agent = Agent(model="google/gemini-2.0-flash-001")
```

Any OpenAI-compatible API works with `OpenAIChat(base_url=..., api_key=...)`. Anthropic, Mistral, Google, and Ollama use their native SDKs (optional deps). Perplexity and OpenRouter use the OpenAI-compatible interface.

## Optional Extras

Install only what you need:

```bash
pip install definable[readers]          # PDF, DOCX, PPTX, XLSX, ODS, RTF parsers
pip install definable[serve]            # FastAPI + Uvicorn for agent.serve()
pip install definable[cron]             # Cron trigger support
pip install definable[jwt]              # JWT authentication
pip install definable[runtime]          # serve + cron combined
pip install definable[discord]          # Discord interface
pip install definable[browser]          # Browser automation (Playwright CDP)
pip install definable[desktop]          # macOS Desktop Bridge
pip install definable[postgres-memory]  # PostgreSQL memory store
pip install definable[research]         # Deep research (DuckDuckGo + curl-cffi)
pip install definable[mistral-ocr]      # Mistral AI document parsing
pip install definable[mem0-memory]      # Mem0 hosted memory store
```

**Vector DB backends:**

```bash
pip install definable[pgvector]         # PostgreSQL + pgvector
pip install definable[qdrant]           # Qdrant
pip install definable[chroma]           # ChromaDB
pip install definable[mongodb]          # MongoDB
pip install definable[redis]            # Redis
pip install definable[pinecone]         # Pinecone
```

## Documentation

Full documentation: [docs.definable.ai](https://docs.definable.ai)

## Project Structure

```
definable/definable/
├── agent/              # Agent orchestration, config, middleware, loop
│   ├── auth/           # APIKeyAuth, JWTAuth, AllowlistAuth, CompositeAuth
│   ├── compression/    # Context window compression
│   ├── guardrail/      # Input/output/tool policy, PII, token limits, composable rules
│   ├── interface/      # Telegram, Discord, Desktop, CLI integrations
│   ├── observability/  # Live dashboard, metrics, trace browser, SSE events
│   ├── pipeline/       # 8-phase execution pipeline, hooks, ToolRetry, DebugConfig
│   ├── reasoning/      # Thinking layer (chain-of-thought)
│   ├── replay/         # Run inspection, re-execution, comparison
│   ├── research/       # Deep research: multi-wave web search, CKU, gap analysis
│   ├── run/            # RunOutput, RunEvent types
│   ├── runtime/        # AgentRuntime, AgentServer, dev mode
│   ├── tracing/        # JSONL trace export, DebugExporter
│   └── trigger/        # Webhook, Cron, EventTrigger
├── browser/            # BrowserToolkit — 55 tools via Playwright CDP
├── claude_code/        # ClaudeCodeAgent — subprocess wrapper for Claude Code CLI
├── knowledge/          # RAG: embedders, vector DBs, rerankers, chunkers
├── mcp/                # Model Context Protocol client
├── media.py            # Image, Audio, Video, File types
├── memory/             # Session-history memory + 3 store backends (SQLite, File, InMemory)
├── model/              # OpenAI, DeepSeek, Moonshot, xAI, Anthropic, Mistral, Gemini, Perplexity, Ollama, OpenRouter
├── reader/             # File parsers + AI reader providers
├── skill/              # Built-in + custom skills, skill registry
├── tool/               # @tool decorator, Function wrappers
├── toolkit/            # Toolkit base class
├── vectordb/           # Vector database interfaces (7 backends)
└── utils/              # Logging, supervisor, shared utilities
```

## Contributing

Contributions welcome! To get started:

1. Fork the repo and clone it locally
2. Install for development: `pip install -e .`
3. Make your changes — follow existing code patterns (2-space indentation, 150 char lines)
4. Add tests in `definable/tests/` for new features
5. Run `ruff check` and `ruff format` for linting
6. Run `mypy` for type checking
7. Open a pull request

See `definable/examples/` for usage patterns.

## License

Apache 2.0 — see [LICENSE](LICENSE) for details.
