Metadata-Version: 2.4
Name: composecache
Version: 0.1.2
Summary: Compositional semantic caching for LLM APIs and RAG pipelines
Author: Rojan Upreti
License: MIT
Requires-Python: >=3.10
Requires-Dist: numpy>=1.26.0
Requires-Dist: openai>=1.30.0
Requires-Dist: pgvector>=0.3.2
Requires-Dist: psycopg[binary]>=3.1.19
Provides-Extra: dev
Requires-Dist: mypy>=1.11.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest>=8.2.0; extra == 'dev'
Requires-Dist: ruff>=0.6.0; extra == 'dev'
Description-Content-Type: text/markdown

# ComposeCache Python Package

A Python package that puts a compositional semantic cache in front of LLM API requests, reusing exact, semantically similar, and partially overlapping previous responses.

## Quick Start

### 1. Prerequisites

- Python 3.10+
- PostgreSQL with `pgvector` enabled
- OpenAI API key
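Before installing, it can help to confirm that PostgreSQL is actually reachable on the expected port. A minimal stdlib-only probe (the `postgres_reachable` helper below is illustrative, not part of ComposeCache) might look like:

```python
import socket


def postgres_reachable(host: str = "localhost", port: int = 5432,
                       timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Postgres port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unresolvable.
        return False
```

This only checks TCP connectivity; it does not verify that the `pgvector` extension is installed in the target database.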

### 2. Install

```bash
python -m venv .venv
source .venv/bin/activate
pip install composecache
```

### 3. Configure Environment

```bash
export DATABASE_URL="postgresql://dev:dev@localhost:5432/composecache"
export OPENAI_API_KEY="sk-..."
```

ComposeCache automatically connects and runs its schema migration on first use.

### 4. Run

```bash
python - <<'PY'
import os

from composecache import ComposeCache

cache = ComposeCache(
    database_url=os.environ["DATABASE_URL"],      # set in step 3; a literal URL also works
    openai_api_key=os.environ["OPENAI_API_KEY"],  # set in step 3; a literal key also works
)

request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Compare GDP of France and Germany"}],
}

first = cache.complete(request)
second = cache.complete(request)

print("First call cache type:", first["cache_type"])
print("Second call cache type:", second["cache_type"])
print("Answer:", second["content"])
PY
```

### 5. Useful Response Fields

- `content`: model answer text
- `cache_type`: `exact`, `semantic`, `partial`, or `miss`
- `tokens_saved`: estimated tokens avoided via cache reuse
- `cost_saved`: estimated cost savings
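These fields make it easy to track how much the cache is saving across a workload. A small aggregation sketch (the `summarize_savings` helper is hypothetical; only the field names above come from ComposeCache) over a list of response dicts:

```python
def summarize_savings(responses: list[dict]) -> dict:
    """Aggregate cache statistics from a list of ComposeCache response dicts."""
    if not responses:
        return {"hit_rate": 0.0, "tokens_saved": 0, "cost_saved": 0.0}
    # Any cache_type other than "miss" (exact, semantic, or partial) counts as a hit.
    hits = sum(1 for r in responses if r["cache_type"] != "miss")
    return {
        "hit_rate": hits / len(responses),
        "tokens_saved": sum(r.get("tokens_saved", 0) for r in responses),
        "cost_saved": sum(r.get("cost_saved", 0.0) for r in responses),
    }
```

Feeding the two responses from the run example above into such a helper would show the second call registering as a hit while the first is a miss.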
