Metadata-Version: 2.4
Name: llm-meter
Version: 0.2.1
Summary: Accurate LLM usage & cost tracking for Python backends (FastAPI-native)
Project-URL: Homepage, https://github.com/doubledare704/llm-meter
Project-URL: Issues, https://github.com/doubledare704/llm-meter/issues
Author-email: Oleksii Ovdiienko <doubledare704@gmail.com>
License: MIT
License-File: LICENSE
Keywords: cost-tracking,fastapi,instrumentation,llm,middleware,openai,usage
Classifier: Development Status :: 4 - Beta
Classifier: Framework :: FastAPI
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: aiosqlite>=0.22.1
Requires-Dist: fastapi>=0.128.0
Requires-Dist: openai>=2.14.0
Requires-Dist: pydantic>=2.12.5
Requires-Dist: rich>=13.0.0
Requires-Dist: sqlalchemy[asyncio]>=2.0.45
Requires-Dist: typer>=0.21.0
Provides-Extra: excel
Requires-Dist: openpyxl>=3.1.5; extra == 'excel'
Description-Content-Type: text/markdown

# llm-meter 📊

[![PyPI Version](https://img.shields.io/pypi/v/llm-meter)](https://pypi.org/project/llm-meter/)
[![Python Version](https://img.shields.io/pypi/pyversions/llm-meter)](https://www.python.org/)
[![License](https://img.shields.io/github/license/doubledare704/llm-meter)](LICENSE)

**Accurate LLM usage & cost tracking for Python backends.**

`llm-meter` solves the "black box" of LLM costs by providing framework-native (FastAPI) instrumentation that attributes every token, cent, and millisecond to your business-level concepts (User ID, Feature, Endpoint).

---

## ⚡ 15-Minute Setup

```bash
# Using uv (recommended)
uv add llm-meter

# or pip
pip install llm-meter
```

### 1. Initialize & Instrument
```python
from fastapi import FastAPI
from llm_meter import LLMMeter, FastAPIMiddleware
from openai import OpenAI

# 1. Initialize SDK
meter = LLMMeter(
    storage_url="sqlite+aiosqlite:///llm_usage.db",
    providers={"openai": {"api_key": "YOUR_KEY"}}
)

app = FastAPI()

# 2. Add middleware for automatic attribution
app.add_middleware(FastAPIMiddleware, meter=meter)

# 3. Wrap your client
client = meter.wrap_client(OpenAI())

@app.post("/generate")
async def generate(prompt: str):
    # This call is automatically tracked and attributed to "/generate"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return {"text": response.choices[0].message.content}
```

### 2. Inspect via CLI
```bash
# Get a high-level summary
llm-meter usage summary

# See which endpoint costs the most
llm-meter usage by-endpoint
```

---

## 🎯 Key Features

- **Accounting, not Observability:** Focuses on cost attribution and usage tracking, not heavy traces or prompt logging.
- **FastAPI Native:** Middleware handles `request_id` and context propagation automatically.
- **Async-Safe:** Powered by `contextvars` to ensure usage is correctly attributed even in complex async workflows.
- **Proxy-Free:** Works via SDK-level instrumentation (no network interception or latency overhead).
- **Self-Hosted:** You own your data. Supports SQLite (default) and PostgreSQL.
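
The async-safety point above relies on `contextvars` from the standard library. The snippet below is a minimal, self-contained sketch of that mechanism in general (it does not use llm-meter's internals, and the names `current_endpoint` and `handle_request` are illustrative): each asyncio task gets its own copy of the context, so concurrent requests never see each other's attribution.

```python
import asyncio
import contextvars

# Context variable carrying the attribution for the current request.
# Each asyncio Task gets an independent copy of the context.
current_endpoint = contextvars.ContextVar("current_endpoint", default="unknown")

usage_log = []

async def fake_llm_call(tokens: int) -> None:
    # Simulate provider latency so the two tasks interleave.
    await asyncio.sleep(0.01)
    # Reads the value set in *this* task's context, not another task's.
    usage_log.append({"endpoint": current_endpoint.get(), "tokens": tokens})

async def handle_request(endpoint: str, tokens: int) -> None:
    current_endpoint.set(endpoint)  # isolated per task
    await fake_llm_call(tokens)

async def main() -> None:
    # Two concurrent "requests"; attribution stays correct for each.
    await asyncio.gather(
        handle_request("/generate", 120),
        handle_request("/summarize", 80),
    )

asyncio.run(main())
print(usage_log)
```

This is why no explicit `request_id` threading is needed in handler code: the middleware sets the context once, and any wrapped client call deeper in the async call stack reads it back.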

---

## ⚠️ v1 Limitations

- Provider support is limited to OpenAI and Azure OpenAI.
- Batch (non-streaming) responses only; streaming token tracking is planned for v1.1.
- No web UI (everything is available via CLI or SQL).
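
Since the data lives in a plain SQLite (or PostgreSQL) database, ad-hoc analysis without a UI is straightforward. The sketch below shows the kind of per-endpoint aggregation the CLI performs; note that the `usage_events` table and its columns are hypothetical stand-ins, not llm-meter's actual schema (inspect your `llm_usage.db` to see the real tables):

```python
import sqlite3

# In-memory stand-in for llm_usage.db, with an *assumed* schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE usage_events (endpoint TEXT, model TEXT, "
    "prompt_tokens INTEGER, completion_tokens INTEGER, cost_usd REAL)"
)
conn.executemany(
    "INSERT INTO usage_events VALUES (?, ?, ?, ?, ?)",
    [
        ("/generate", "gpt-4", 120, 80, 0.0096),
        ("/generate", "gpt-4", 200, 150, 0.0165),
        ("/summarize", "gpt-4", 90, 40, 0.0051),
    ],
)

# Total tokens and cost per endpoint, most expensive first.
rows = conn.execute(
    "SELECT endpoint, SUM(prompt_tokens + completion_tokens) AS tokens, "
    "ROUND(SUM(cost_usd), 4) AS cost "
    "FROM usage_events GROUP BY endpoint ORDER BY cost DESC"
).fetchall()
print(rows)
```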

---

## 🛠 Contributing

We love contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to get started.

---

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
