Metadata-Version: 2.4
Name: fixturify
Version: 0.1.25
Summary: A collection of convenient testing utilities for Python
Project-URL: Homepage, https://github.com/eleven-sea/pytools
Project-URL: Repository, https://github.com/eleven-sea/pytools
Project-URL: Issues, https://github.com/eleven-sea/pytools/issues
Author: eleven-sea
License-Expression: MIT
Keywords: fixtures,json,mocking,pytest,sql,testing
Classifier: Development Status :: 4 - Beta
Classifier: Framework :: Pytest
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Testing
Requires-Python: >=3.10
Requires-Dist: deepdiff>=6.0
Description-Content-Type: text/markdown

# fixturify

A collection of Python testing utilities for SQL, HTTP mocking, JSON fixtures, assertions, and object mapping.

## Installation

```bash
pip install fixturify
```

## Table of contents

- [sql](#sql) - Execute SQL files before/after tests
- [SqlAssert](#sqlassert) - Fluent database assertions
- [http](#http) - Record and replay HTTP calls
- [read](#read) - Inject JSON fixtures into tests
- [JsonAssert](#jsonassert) - Compare objects to JSON files
- [ObjectMapper](#objectmapper) - Bidirectional object-JSON mapping
- [Combined usage](#combined-usage) - All modules together, decorator stacking

---

# sql

Decorator for executing SQL files before and/or after tests.

```python
from fixturify import sql, Phase, SqlTestConfig
```

## Basic usage

```python
@sql(path="./setup.sql")
def test_something():
    # SQL executed before test
    pass
```

## Decorator parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `path` | `str` | required | Path to SQL file (relative to test file) |
| `phase` | `Phase` | `Phase.BEFORE` | When to execute |
| `config` | `SqlTestConfig \| None` | `None` | Database config (or use fixture) |

## Execution phases

```python
@sql(path="./setup.sql", phase=Phase.BEFORE)
@sql(path="./cleanup.sql", phase=Phase.AFTER)
def test_with_cleanup():
    # setup.sql executed before test
    # cleanup.sql executed after test (even if test fails)
    pass
```

- `Phase.BEFORE` (default) - execute SQL before test
- `Phase.AFTER` - execute SQL after test, runs in `finally` block (always executes, even on failure)

## Database configuration

### Via pytest fixture (recommended)

```python
# conftest.py
import pytest
from fixturify import SqlTestConfig

@pytest.fixture
def sql_config() -> SqlTestConfig:
    return SqlTestConfig(
        driver="psycopg2",
        host="localhost",
        port=5432,
        database="testdb",
        user="postgres",
        password="postgres",
    )
```

The fixture is auto-discovered - no need to pass it to the decorator.

> **Important:** The return type annotation `-> SqlTestConfig` is **required** for auto-discovery. The discovery mechanism matches fixtures by their return type annotation, so without it the fixture will not be found.

### Directly in decorator

```python
config = SqlTestConfig(
    driver="psycopg2",
    host="localhost",
    database="testdb",
    user="postgres",
    password="postgres",
)

@sql(path="./setup.sql", config=config)
def test_something():
    pass
```

## SqlTestConfig

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `driver` | `str` | yes | Database driver name |
| `host` | `str` | yes | Database host |
| `database` | `str` | yes | Database name |
| `user` | `str` | yes | Username |
| `password` | `str` | yes | Password |
| `port` | `int \| None` | no | Port (uses driver default if omitted) |

Driver name is normalized: leading `+` and whitespace are stripped (e.g. `"postgresql+psycopg2"` becomes `"psycopg2"`).
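
The normalization can be sketched roughly like this (an illustrative helper, not the library's actual code):

```python
def normalize_driver(name: str) -> str:
    """Sketch of the documented normalization: drop any dialect prefix
    before a '+' and strip surrounding whitespace."""
    return name.strip().split("+")[-1].strip()

print(normalize_driver("postgresql+psycopg2"))  # psycopg2
print(normalize_driver(" sqlite3 "))            # sqlite3
```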

## Supported drivers

| Driver | Database | Type | Default port |
|--------|----------|------|--------------|
| `psycopg2` | PostgreSQL | sync | 5432 |
| `psycopg` | PostgreSQL | async | 5432 |
| `asyncpg` | PostgreSQL | async | 5432 |
| `mysql.connector` | MySQL | sync | 3306 |
| `aiomysql` | MySQL | async | 3306 |
| `sqlite3` | SQLite | sync | - |
| `aiosqlite` | SQLite | async | - |

## Async functions

The decorator automatically handles async test functions. Sync drivers in async tests run via a thread pool executor; async drivers in sync tests use a persistent background event loop.

```python
@sql(path="./setup.sql")
async def test_async():
    # Works with asyncpg, aiomysql, aiosqlite
    pass
```

## Multiple decorators

Decorators execute top-to-bottom for `Phase.BEFORE` and bottom-to-top for `Phase.AFTER`:

```python
@sql(path="./first.sql")   # 1st executed
@sql(path="./second.sql")  # 2nd executed
@sql(path="./cleanup.sql", phase=Phase.AFTER)  # executed after test
def test_order():
    pass
```

## Connection caching

Connections are cached and reused between tests based on driver/host/database/user/port. Async connections are invalidated if the event loop changes. All connections are cleaned up via `atexit` handlers.

## Path resolution

Paths are resolved relative to the test file:

```
tests/
  test_users.py      <- @sql(path="./fixtures/setup.sql")
  fixtures/
    setup.sql        <- this file will be used
```

---

# SqlAssert

Fluent API for database state assertions in tests.

```python
from fixturify import sql, SqlAssert, SqlTestConfig
```

## Basic usage

```python
@sql(path="./setup.sql")
def test_user_creation(sql_assert: SqlAssert):
    create_user(name="John", email="john@example.com")

    sql_assert.table("users").where(name="John").exists()
```

## Configuration

The `sql_assert` fixture requires a `sql_config` fixture returning `SqlTestConfig`:

```python
# conftest.py
import pytest
from fixturify import SqlTestConfig

@pytest.fixture
def sql_config() -> SqlTestConfig:
    return SqlTestConfig(
        driver="psycopg2",
        host="localhost",
        port=5432,
        database="testdb",
        user="postgres",
        password="postgres",
    )
```

The `sql_assert` fixture is automatically available when `sql_config` is defined.

## Entry points

```python
sql_assert.table("users")           # TableAssert - fluent builder for table queries
sql_assert.raw(sql, params=None)    # RawQueryAssert - custom SQL queries
```

## WHERE conditions

### where(**conditions)

Add equality conditions combined with AND:

```python
sql_assert.table("users").where(name="John", role="admin").exists()
# WHERE name = 'John' AND role = 'admin'
```

### Chaining where()

Multiple `where()` calls are combined with AND:

```python
sql_assert.table("users")\
    .where(role="admin")\
    .where(is_active=True)\
    .exists()
# WHERE role = 'admin' AND is_active = True
```

### where_null(column)

```python
sql_assert.table("users").where_null("deleted_at").count(5)
# WHERE deleted_at IS NULL
```

### where_not_null(column)

```python
sql_assert.table("users").where_not_null("email").exists()
# WHERE email IS NOT NULL
```

### where() with lists (IN)

Pass a list as a value to generate an `IN (...)` clause:

```python
sql_assert.table("users").where(id=[1, 2, 3]).exists()
# WHERE id IN (1, 2, 3)

sql_assert.table("users").where(id=[1, 2, 3], role="admin").fetch_all()
# WHERE id IN (1, 2, 3) AND role = 'admin'
```

Empty lists raise `ValueError` (SQL `IN ()` is invalid).
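
The clause generation and the empty-list check can be pictured with a minimal sketch (a hypothetical helper, not the library's internals):

```python
def in_clause(column: str, values: list) -> tuple[str, list]:
    """Build an 'IN (...)' fragment with one placeholder per value.
    Empty lists are rejected up front because SQL 'IN ()' is invalid."""
    if not values:
        raise ValueError(f"where({column}=[]) would produce invalid SQL 'IN ()'")
    placeholders = ", ".join(["%s"] * len(values))
    return f"{column} IN ({placeholders})", list(values)

sql, params = in_clause("id", [1, 2, 3])
print(sql)     # id IN (%s, %s, %s)
print(params)  # [1, 2, 3]
```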

## Ordering and limiting

### order_by(column, desc=False)

```python
first_user = sql_assert.table("users").order_by("created_at").fetch_one()
latest_user = sql_assert.table("users").order_by("created_at", desc=True).fetch_one()
```

### limit(n)

```python
top_5 = sql_assert.table("users").order_by("score", desc=True).limit(5).fetch_all()
```

## Existence assertions

### exists()

Assert at least one row matches:

```python
sql_assert.table("users").where(name="John").exists()
```

### not_exists()

Assert no rows match:

```python
sql_assert.table("users").where(name="Deleted").not_exists()
```

## Count assertions

```python
sql_assert.table("users").count(5)       # exactly 5
sql_assert.table("users").count_gt(0)    # more than 0
sql_assert.table("users").count_gte(1)   # at least 1
sql_assert.table("users").count_lt(100)  # less than 100
sql_assert.table("users").count_lte(10)  # at most 10
```

## Value assertions

### has(**fields)

Assert first matching row has specified values:

```python
sql_assert.table("users").where(id=1).has(
    name="John",
    email="john@example.com",
    role="admin"
)
```

### has_all(**fields)

Assert ALL matching rows have specified values:

```python
sql_assert.table("users").where(role="admin").has_all(is_active=True)
```

### has_any(**fields)

Assert ANY matching row has specified values:

```python
sql_assert.table("users").has_any(email="john@example.com")
```

## Fetching data

### fetch_one()

Fetch first matching row as dict:

```python
row = sql_assert.table("users").where(id=1).fetch_one()
# {'id': 1, 'name': 'John', 'email': 'john@example.com'}
```

### fetch_one(TargetClass)

Fetch and map to object using ObjectMapper:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

user = sql_assert.table("users").where(id=1).fetch_one(User)
assert isinstance(user, User)
```

Works with dataclasses, Pydantic models, SQLAlchemy models, SQLModel, and plain objects. Maps only flat column data from the table row - relationships are not loaded (use `raw()` with JOIN for that).

### fetch_all()

Fetch all matching rows as list of dicts:

```python
rows = sql_assert.table("users").fetch_all()
```

### fetch_all(TargetClass)

Fetch all and map to objects:

```python
users = sql_assert.table("users").fetch_all(User)
assert all(isinstance(u, User) for u in users)
```

### fetch_value(column)

Fetch single value from first matching row:

```python
email = sql_assert.table("users").where(id=1).fetch_value("email")
# "john@example.com"
```

Returns `None` if no rows match or the column is not found.

## Method chaining

All query-building methods return new instances (immutable), enabling safe chaining:

```python
sql_assert.table("users")\
    .where(role="admin")\
    .where(is_active=True)\
    .where_not_null("email")\
    .order_by("name")\
    .limit(10)\
    .fetch_all(User)
```

Chaining creates independent queries:

```python
base = sql_assert.table("users")
admins = base.where(role="admin")
active_admins = admins.where(is_active=True)

# Each query is independent
base.count(100)           # all users
admins.count(10)          # only admins
active_admins.count(8)    # only active admins
```

## Raw SQL queries

For complex queries, use `raw()`:

```python
# Custom query with parameters
result = sql_assert.raw(
    "SELECT COUNT(*) as cnt FROM users WHERE created_at > %s",
    ["2024-01-01"]
).fetch_one()
assert result["cnt"] > 0

# Existence check with JOIN
sql_assert.raw(
    "SELECT 1 FROM users u JOIN orders o ON u.id = o.user_id WHERE o.total > %s",
    [1000]
).exists()

# Non-existence check
sql_assert.raw(
    "SELECT 1 FROM users WHERE email LIKE %s",
    ["%@banned.com"]
).not_exists()
```

Parameter placeholders are automatically converted per driver (`%s` for psycopg2/mysql, `$1` for asyncpg, `?` for sqlite3).
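
The conversion can be sketched roughly as follows (illustrative only; the library handles this internally):

```python
def convert_placeholders(sql: str, style: str) -> str:
    """Sketch of per-driver placeholder conversion: rewrite '%s'
    into the target driver's parameter style."""
    if style == "qmark":        # sqlite3: ?
        return sql.replace("%s", "?")
    if style == "numeric":      # asyncpg: $1, $2, ...
        parts = sql.split("%s")
        out = parts[0]
        for i, part in enumerate(parts[1:], start=1):
            out += f"${i}" + part
        return out
    return sql                  # psycopg2 / mysql keep %s

print(convert_placeholders("SELECT 1 FROM users WHERE id = %s AND role = %s", "numeric"))
# SELECT 1 FROM users WHERE id = $1 AND role = $2
```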

## Async support

For async tests, use methods with `a` prefix:

```python
@sql(path="./setup.sql")
async def test_async(sql_assert: SqlAssert):
    await sql_assert.table("users").where(name="John").aexists()
    await sql_assert.table("users").where(name="Deleted").anot_exists()
    await sql_assert.table("users").acount(5)
    await sql_assert.table("users").acount_gt(0)
    await sql_assert.table("users").acount_gte(1)
    await sql_assert.table("users").acount_lt(100)
    await sql_assert.table("users").acount_lte(10)

    await sql_assert.table("users").where(id=1).ahas(name="John")
    await sql_assert.table("users").where(role="admin").ahas_all(is_active=True)
    await sql_assert.table("users").ahas_any(email="john@example.com")

    user = await sql_assert.table("users").where(id=1).afetch_one(User)
    users = await sql_assert.table("users").afetch_all(User)
    email = await sql_assert.table("users").where(id=1).afetch_value("email")
```

### Async methods reference

| Sync | Async |
|------|-------|
| `exists()` | `aexists()` |
| `not_exists()` | `anot_exists()` |
| `count(n)` | `acount(n)` |
| `count_gt(n)` | `acount_gt(n)` |
| `count_gte(n)` | `acount_gte(n)` |
| `count_lt(n)` | `acount_lt(n)` |
| `count_lte(n)` | `acount_lte(n)` |
| `has(**fields)` | `ahas(**fields)` |
| `has_all(**fields)` | `ahas_all(**fields)` |
| `has_any(**fields)` | `ahas_any(**fields)` |
| `fetch_one()` | `afetch_one()` |
| `fetch_all()` | `afetch_all()` |
| `fetch_value(col)` | `afetch_value(col)` |

## Relationships

`fetch_one(TargetClass)` and `fetch_all(TargetClass)` map only flat column data from the queried table. Relationships are not loaded. For related data, use manual queries or `raw()` with JOIN:

```python
# Manual queries
user = sql_assert.table("users").where(id=1).fetch_one()
orders = sql_assert.table("orders").where(user_id=user["id"]).fetch_all()

# Using raw() with JOIN
result = sql_assert.raw("""
    SELECT u.id, u.name, COUNT(o.id) as order_count
    FROM users u
    LEFT JOIN orders o ON u.id = o.user_id
    WHERE u.id = %s
    GROUP BY u.id, u.name
""", [1]).fetch_one()
```

## Error messages

Assertion failures produce clear error messages with context:

```
AssertionError: Expected at least 1 row in 'users' where name='John', found 0
```

```
AssertionError: Expected 5 rows in 'orders' where status='pending', found 3
```

```
AssertionError: Row in 'users' doesn't match expected values:
  email: expected 'john@example.com', got 'jane@example.com'
  role: expected 'admin', got 'user'
```

## Full example

```python
from dataclasses import dataclass
from fixturify import sql, Phase, SqlAssert

@dataclass
class User:
    id: int
    name: str
    email: str
    role: str

@sql(path="./fixtures/setup.sql")
@sql(path="./fixtures/cleanup.sql", phase=Phase.AFTER)
def test_order_processing(sql_assert: SqlAssert):
    # Arrange
    sql_assert.table("orders").where(status="pending").count(1)

    # Act
    process_pending_orders()

    # Assert
    sql_assert.table("orders").where(status="pending").not_exists()
    sql_assert.table("orders").where(status="completed").exists()

    order = sql_assert.table("orders").where(id=1).fetch_one()
    assert order["status"] == "completed"

    sql_assert.table("notifications")\
        .where(user_id=order["user_id"])\
        .where(type="order_completed")\
        .exists()

    sql_assert.table("audit_log")\
        .where(entity="order")\
        .where(entity_id=1)\
        .has(action="status_change", new_value="completed")
```

## Supported drivers

Works with all drivers supported by the `@sql` decorator:

| Driver | Database | Type |
|--------|----------|------|
| `psycopg2` | PostgreSQL | sync |
| `psycopg` | PostgreSQL | async |
| `asyncpg` | PostgreSQL | async |
| `mysql.connector` | MySQL | sync |
| `aiomysql` | MySQL | async |
| `sqlite3` | SQLite | sync |
| `aiosqlite` | SQLite | async |

---

# http

Decorator for recording and replaying HTTP calls in tests.

```python
from fixturify import http, HttpTestConfig
```

## Basic usage

```python
import requests

@http(path="./fixtures/api_calls.json")
def test_api():
    response = requests.get("https://api.example.com/users")
    assert response.status_code == 200
```

**First run**: makes real HTTP calls and saves them to JSON file.

**Subsequent runs**: replays saved responses (no network calls).

## Decorator parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `path` | `str` | required | Path to JSON fixture file (relative to test file) |
| `config` | `HttpTestConfig \| None` | `None` | Config instance (or use fixture) |
| `**kwargs` | | | Per-test `HttpTestConfig` field overrides |

## Supported HTTP libraries

- `requests` (sync)
- `httpx` (sync and async)
- `urllib3` (sync)
- `http.client` (stdlib, sync)
- `aiohttp` (async)
- `httplib2` (sync)
- `tornado` (sync and async)
- `boto3` / `botocore` (AWS SDK)
- `httpcore` (low-level, used by httpx)

## Async functions

```python
import httpx

@http(path="./fixtures/api.json")
async def test_async_api():
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.example.com/users")
        assert response.status_code == 200
```

## Configuration

### Via pytest fixture (recommended)

```python
# conftest.py
import pytest
from fixturify import HttpTestConfig

@pytest.fixture
def http_config() -> HttpTestConfig:
    return HttpTestConfig(
        ignore_request_headers=["Authorization"],
        exclude_request_headers=["Authorization", "X-API-Key"],
    )
```

The fixture is auto-discovered - no need to pass it to the decorator.

> **Important:** The return type annotation `-> HttpTestConfig` is **required** for auto-discovery. The discovery mechanism matches fixtures by their return type annotation, so without it the fixture will not be found.

### Directly in decorator

```python
config = HttpTestConfig(
    ignore_request_headers=["Authorization"],
    strict_order=True,
)

@http(path="./fixtures/api.json", config=config)
def test_api():
    pass
```

### Per-test overrides (kwargs)

Override or extend config for a single test by passing `HttpTestConfig` fields directly to the decorator:

```python
@pytest.fixture
def http_config() -> HttpTestConfig:
    return HttpTestConfig(redact_response_body=["token"])

# Adds "secret" redaction only for this test
@http(path="./fixtures/api.json", redact_response_body=["secret"])
def test_something():
    ...
# Result: redact_response_body = ["token", "secret"]
```

**Merge semantics:**

- **List fields** (e.g. `redact_response_body`, `ignore_request_headers`): additive - config list + kwargs list
- **Bool fields** (`match_request_body`, `strict_order`): override - kwarg wins over config

```python
# Override strict_order for one test
@http(path="./fixtures/api.json", strict_order=True)
def test_ordered():
    ...

# Combine with explicit config
config = HttpTestConfig(exclude_hosts=["testserver"])

@http(path="./fixtures/api.json", config=config, redact_response_body=["secret"])
def test_combined():
    ...

# Standalone (no fixture, no config=)
@http(path="./fixtures/api.json", ignore_request_headers=["Authorization"], strict_order=True)
def test_standalone():
    ...
```

Invalid kwargs raise `TypeError` at decoration time (fail fast).
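
These merge rules can be sketched like this (a stand-in `Config` class for illustration; not the real `HttpTestConfig` or the library's merge code):

```python
from dataclasses import dataclass, field, fields, replace

@dataclass
class Config:  # illustrative stand-in for HttpTestConfig
    redact_response_body: list = field(default_factory=list)
    strict_order: bool = True

def merge(config: Config, **kwargs) -> Config:
    """List fields are additive (config list + kwargs list); bool fields
    are overridden by the kwarg; unknown kwargs fail fast."""
    valid = {f.name for f in fields(Config)}
    merged = replace(config)  # shallow copy
    for key, value in kwargs.items():
        if key not in valid:
            raise TypeError(f"unexpected keyword argument {key!r}")
        current = getattr(merged, key)
        new = current + list(value) if isinstance(current, list) else value
        setattr(merged, key, new)
    return merged

merged = merge(Config(redact_response_body=["token"]), redact_response_body=["secret"])
print(merged.redact_response_body)  # ['token', 'secret']
```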

## HttpTestConfig

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `ignore_request_headers` | `List[str]` | `[]` | Headers to ignore when matching requests |
| `ignore_response_headers` | `List[str]` | `[]` | Headers to ignore when comparing responses |
| `exclude_request_headers` | `List[str]` | `[]` | Headers to remove from recordings |
| `exclude_response_headers` | `List[str]` | `[]` | Headers to remove from response recordings |
| `exclude_hosts` | `List[str]` | `[]` | Hosts excluded from recording/playback (always real calls) |
| `match_request_body` | `bool` | `True` | Whether to include request body in matching |
| `strict_order` | `bool` | `True` | Whether requests must occur in recorded order |
| `redact_request_body` | `List[str]` | `[]` | JSON paths to redact in request body |
| `redact_response_body` | `List[str]` | `[]` | JSON paths to redact in response body |
| `update` | `bool` | `False` | Enable update mode (hybrid playback + recording) |

## Update mode

When `update=True` and the fixture file already exists, the decorator runs in **update mode** — a hybrid of playback and recording:

- Requests that **match** an existing recording return the recorded response (no real HTTP call)
- Requests that **don't match** make a real HTTP call and get **appended** to the fixture file
- **No `UnusedRecordingsError`** is raised — existing recordings that weren't used are kept as-is

This is useful when you add new API calls to an existing test without re-recording everything from scratch.

```python
# Via decorator kwarg
@http(path="./fixtures/api.json", update=True)
def test_api():
    # Existing recorded calls return from mock
    # New calls go through and get appended to the fixture
    ...

# Via config
config = HttpTestConfig(update=True)

@http(path="./fixtures/api.json", config=config)
def test_api():
    ...
```

> **Tip:** Once your fixture file is complete, remove `update=True` to go back to strict playback mode. This way your tests stay fast and deterministic.

## Request matching

Requests are matched by:

1. **Method** - case-insensitive (GET, POST, PUT, DELETE, PATCH, etc.)
2. **URL** - scheme + host + path
3. **Query parameters** - order-independent
4. **Headers** - expected headers must be present (extra headers in actual request are OK)
5. **Request body** - if `match_request_body=True`: JSON compared structurally (order-independent), text compared as string

**Strict order mode** (default, `strict_order=True`): requests must match recordings in exact sequence.

**Any order mode** (`strict_order=False`): requests can match any unused recording.
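
The order-independent URL/query comparison can be sketched with the standard library (illustrative, not the library's matcher):

```python
from urllib.parse import urlsplit, parse_qs

def urls_match(recorded: str, actual: str) -> bool:
    """Sketch of the documented URL matching: scheme + host + path must
    be equal; query parameters are compared order-independently."""
    r, a = urlsplit(recorded), urlsplit(actual)
    return (
        (r.scheme, r.netloc, r.path) == (a.scheme, a.netloc, a.path)
        and parse_qs(r.query) == parse_qs(a.query)
    )

print(urls_match(
    "https://api.example.com/users?page=1&size=10",
    "https://api.example.com/users?size=10&page=1",
))  # True
```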

### asyncio.gather and concurrent requests

When using `asyncio.gather` (or similar concurrency patterns), multiple HTTP calls execute concurrently and their completion order is **non-deterministic**. The default `strict_order=True` will cause these tests to fail intermittently because the matcher expects requests in the exact recorded sequence.

For tests with concurrent requests, override `strict_order` to `False`:

```python
@http(path="./fixtures/api.json", strict_order=False)
async def test_concurrent_calls():
    results = await asyncio.gather(
        fetch_users(),
        fetch_orders(),
        fetch_products(),
    )
```

> **Note:** If you have two identical requests (same method, URL, headers, and body) with different responses, the any-order matcher returns the first unused match. With `asyncio.gather`, which response goes to which call becomes non-deterministic. In practice this is rare since concurrent tasks typically call different endpoints or use different parameters.

## Default ignored headers

A large set of headers is automatically ignored during matching and excluded from recordings. This includes:

- Connection/protocol: `host`, `connection`, `content-length`, `transfer-encoding`
- Client: `user-agent`, `accept`, `accept-encoding`, `accept-language`, `cache-control`
- Security/auth: `cookie`, `set-cookie`, `x-csrf-token`
- Tracing: `traceparent`, `tracestate`, `x-request-id`, `x-correlation-id`, `x-trace-id`, `x-b3-*`
- AWS: `x-amz-date`, `x-amz-security-token`, `x-amz-content-sha256`, `amz-sdk-*`
- Proxy/CDN: `forwarded`, `x-forwarded-*`, `via`, `cf-*`
- Rate limiting: `x-ratelimit-*`
- Other: `date`, `server`, `expires`, `vary`, `alt-svc`

## Configuration examples

### Ignoring Authorization header

```python
config = HttpTestConfig(
    ignore_request_headers=["Authorization"],    # don't compare during playback
    exclude_request_headers=["Authorization"],   # don't save to file
)
```

### Disabling strict order

```python
@http(path="./fixtures/api.json", strict_order=False)
def test_any_order():
    # Requests can occur in any order
    requests.get("https://api.example.com/second")
    requests.get("https://api.example.com/first")
```

### Excluding host from mocking

```python
config = HttpTestConfig(
    exclude_hosts=["testserver", "localhost:8000"],
)

@http(path="./fixtures/api.json", config=config)
def test_with_local_server():
    # These calls go to real server:
    requests.get("http://testserver/api/health")

    # These are recorded/replayed:
    requests.get("https://external-api.com/data")
```

### Ignoring request body

```python
config = HttpTestConfig(match_request_body=False)
```

### Redacting sensitive data in body

Redact sensitive fields in request/response JSON bodies using path notation:

```python
config = HttpTestConfig(
    redact_response_body=[
        "access_token",           # root level field
        "user.email",             # nested field
        "users[*].password",      # field in each array element
    ],
    redact_request_body=[
        "credentials.secret",
    ],
)
```

Redacted values are replaced with `"*******"`. Redacted request fields are also ignored during body matching in playback.

**Path syntax:**

| Pattern | Description | Example |
|---------|-------------|---------|
| `"field"` | Root level field | `{"field": "secret"}` → `{"field": "*******"}` |
| `"a.b.c"` | Nested field | `{"a": {"b": {"c": "secret"}}}` → `{"a": {"b": {"c": "*******"}}}` |
| `"items[*].token"` | Field in each array element | `{"items": [{"token": "x"}, {"token": "y"}]}` → all redacted |
| `"a[*].b[*].c"` | Multiple nested arrays | Works recursively through all array levels |

**Example - OAuth token response:**

```python
config = HttpTestConfig(
    redact_response_body=[
        "access_token",
        "refresh_token",
        "id_token",
    ],
)

@http(path="./fixtures/oauth.json", config=config)
def test_oauth():
    response = requests.post("https://oauth.example.com/token", data={...})
    # Tokens are redacted in saved recording
```

## Binary content handling

Binary responses (images, PDFs, archives, etc.) are stored in a `__files/` subdirectory alongside the JSON fixture. The content type is auto-detected, and filenames include a content hash for deduplication.

Detected binary types: `image/*`, `audio/*`, `video/*`, `application/octet-stream`, `application/pdf`, `application/zip`, `application/gzip`, `font/*`, and more.

## Stacked decorators

Multiple `@http` decorators can be stacked on the same test function.

## Exceptions

| Exception | When |
|-----------|------|
| `NoMatchingRecordingError` | Request doesn't match any recording during playback |
| `UnusedRecordingsError` | Some recordings were not used after test completes |
| `RequestMismatchError` | Details about request mismatch (method, URL, headers, body) |

## Re-recording

Delete the JSON fixture file and run the test again:

```bash
rm tests/fixtures/api_calls.json
pytest tests/test_api.py
```

## Recording file format

```json
{
  "mappings": [
    {
      "request": {
        "method": "GET",
        "url": "https://api.example.com/users",
        "headers": {"Accept": "application/json"},
        "queryParameters": {},
        "body": null
      },
      "response": {
        "status": 200,
        "headers": {"Content-Type": "application/json"},
        "body": [{"id": 1, "name": "John"}]
      }
    }
  ]
}
```

JSON bodies are stored as native JSON objects (not stringified). Binary bodies reference external files via `bodyFileName`.
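
For illustration, a mapping with a binary body might look like this (the `bodyFileName` field is documented above; the concrete filename shown here is a made-up example):

```json
{
  "request": {
    "method": "GET",
    "url": "https://api.example.com/logo.png",
    "headers": {},
    "queryParameters": {},
    "body": null
  },
  "response": {
    "status": 200,
    "headers": {"Content-Type": "image/png"},
    "bodyFileName": "__files/logo-3fa8c2.png"
  }
}
```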

## Path resolution

Paths are resolved relative to the test file:

```
tests/
  test_api.py            <- @http(path="./fixtures/api.json")
  fixtures/
    api.json             <- recordings saved here
    __files/             <- binary response files
```

---

# read

Decorator for injecting JSON file data into test functions.

```python
from fixturify import read
```

## Basic usage

### As dict

```python
@read.fixture(path="./fixtures/config.json", fixture_name="config")
def test_config(config: dict):
    assert config["timeout"] == 30
    assert config["debug"] is True
```

### As object

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

@read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)
def test_user(user: User):
    assert user.name == "John"
    assert user.age == 30
```

## Decorator parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `path` | `str` | yes | Path to JSON file (relative to test file) |
| `fixture_name` | `str` | yes | Name of parameter to inject |
| `object_class` | `Type \| None` | no | Class for deserialization (default: `None` = raw dict/list) |

## List of objects

```python
# fixtures/users.json: [{"name": "John", "age": 30}, {"name": "Jane", "age": 25}]

@read.fixture(path="./fixtures/users.json", fixture_name="users", object_class=User)
def test_users(users: List[User]):
    assert len(users) == 2
    assert users[0].name == "John"
```

## Multiple fixtures

Stack decorators to inject multiple fixtures:

```python
@read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)
@read.fixture(path="./fixtures/config.json", fixture_name="config")
def test_multiple(user: User, config: dict):
    assert user.name == "John"
    assert config["debug"] is True
```

No limit on the number of stacked decorators. Duplicate `fixture_name` raises `ValueError` at decoration time.

## Supported object types

Supported types: dataclasses, Pydantic v1/v2, SQLAlchemy, SQLModel, plain Python objects.

## Nested objects

Nested objects are deserialized recursively (up to 100 levels deep):

```python
@dataclass
class Address:
    city: str

@dataclass
class Company:
    name: str
    employees: List[User]

# fixtures/company.json:
# {"name": "Acme", "employees": [{"name": "John", "age": 30}]}

@read.fixture(path="./fixtures/company.json", fixture_name="company", object_class=Company)
def test_nested(company: Company):
    assert company.name == "Acme"
    assert len(company.employees) == 1
```

## Async functions

```python
@read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)
async def test_async(user: User):
    assert user.name == "John"
```

Fixture loading is synchronous (happens before async test execution).

## Compatibility

### pytest.mark.parametrize

```python
@pytest.mark.parametrize("expected_age", [30])
@read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)
def test_parametrized(user: User, expected_age: int):
    assert user.age == expected_age
```

### Class methods

```python
class TestUser:
    @read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)
    def test_method(self, user: User):
        assert user.name == "John"
```

### Default parameters

```python
@read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)
def test_with_default(user: User, timeout=30):
    pass
```

## Path resolution

Paths are resolved relative to the test file. Supports `../` navigation:

```
tests/
  test_users.py          <- @read.fixture(path="./fixtures/user.json", ...)
  fixtures/
    user.json            <- this file will be used
  nested/
    test_nested.py       <- @read.fixture(path="../fixtures/user.json", ...)
```

---

# JsonAssert

Compare Python objects to JSON files in tests.

```python
from fixturify import JsonAssert
```

## Basic usage

```python
data = {"name": "John", "age": 30}
JsonAssert(data).compare_to_file("./expected.json")
```

If the data doesn't match, an `AssertionError` is raised with a detailed diff.

## Auto-creation of expected files

If the expected JSON file does not exist, it is **automatically created** with the current data. This lets you generate fixtures on first run and verify them in subsequent runs.

## Supported input types

```python
# dict
JsonAssert({"name": "John"}).compare_to_file("./expected.json")

# dataclass / Pydantic / SQLAlchemy / SQLModel / plain object
user = User(name="John", age=30)
JsonAssert(user).compare_to_file("./expected.json")

# list
users = [{"name": "John"}, {"name": "Jane"}]
JsonAssert(users).compare_to_file("./expected.json")

# JSON string (must be object or array, not primitive)
json_str = '{"name": "John"}'
JsonAssert(json_str).compare_to_file("./expected.json")
```

Objects are serialized via `ObjectMapper` before comparison.

## Ignoring fields

### Simple field

```python
data = {"name": "John", "age": 99}  # age differs
JsonAssert(data).ignore("age").compare_to_file("./expected.json")
```

### Nested field (dot notation)

```python
data = {"user": {"name": "John", "profile": {"age": 99, "city": "NYC"}}}
JsonAssert(data).ignore("user.profile.age").compare_to_file("./expected.json")
```

### Multiple fields

```python
JsonAssert(data).ignore("name", "age").compare_to_file("./expected.json")

# or chaining (calls accumulate)
JsonAssert(data).ignore("name").ignore("age").compare_to_file("./expected.json")
```

### Array elements

```python
data = {"users": [{"name": "John", "id": 999}, {"name": "Jane", "id": 888}]}

# Ignore field in every array element
JsonAssert(data).ignore("users[*].id").compare_to_file("./expected.json")

# Ignore field in specific array element
JsonAssert(data).ignore("users[0].id").compare_to_file("./expected.json")
```

### Deeply nested

```python
data = {
    "company": {
        "departments": [
            {"name": "Engineering", "employees": [{"name": "John", "salary": 100000}]}
        ]
    }
}

JsonAssert(data).ignore("company.departments[*].employees[*].salary").compare_to_file("./expected.json")
```

## Comparison options

```python
JsonAssert(data).options(
    ignore_order=True,                          # ignore order in lists
    numeric_tolerance=0.001,                    # tolerance for floats
    ignore_type_in_groups=[(int, float)],       # treat int and float as equal
).compare_to_file("./expected.json")
```

### ignore_order

```python
# Element order doesn't matter
data = {"tags": ["b", "a", "c"]}
# expected.json: {"tags": ["a", "b", "c"]}
JsonAssert(data).options(ignore_order=True).compare_to_file("./expected.json")
```

### numeric_tolerance

```python
data = {"value": 3.14159}
# expected.json: {"value": 3.14}
JsonAssert(data).options(numeric_tolerance=0.01).compare_to_file("./expected.json")
```

Does not apply to integers.
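The float check behaves like `math.isclose` with an absolute tolerance (a sketch of the semantics, not the library's code):

```python
import math

# 3.14159 vs 3.14 differ by about 0.0016
assert math.isclose(3.14159, 3.14, abs_tol=0.01)        # within tolerance
assert not math.isclose(3.14159, 3.14, abs_tol=0.0001)  # outside tolerance
```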

### ignore_type_in_groups

```python
# Treat int and float as same type (1 == 1.0)
JsonAssert(data).options(ignore_type_in_groups=[(int, float)]).compare_to_file("./expected.json")
```

## Combining methods

All methods return `self` for chaining:

```python
JsonAssert(data)\
    .ignore("id", "created_at")\
    .options(ignore_order=True, numeric_tolerance=0.001)\
    .compare_to_file("./expected.json")
```

## ACTUAL output

When a comparison fails, the actual result is saved to an `ACTUAL/` folder next to the expected file, for debugging:

```
tests/
  fixtures/
    expected.json
    ACTUAL/
      expected.json    <- actual result saved here
```

## Error message format

On failure, a formatted diff table is printed to stderr and included in the `AssertionError`:

```
================================================================================
                           JSON COMPARISON FAILED
================================================================================

Expected file: ./expected.json
Actual saved:  ./ACTUAL/expected.json

CHANGED VALUES:
  root['name']: 'John' -> 'Jane'
  root['age']: 30 -> 25

EXTRA IN ACTUAL (found but not expected):
  root['new_field']: 'value'

MISSING IN ACTUAL (expected but not found):
  root['old_field']: 'value'
================================================================================
```

Difference types reported: changed values, type changes, extra in actual, missing in actual. Long values are truncated to 50 characters in the table.

## Full example

```python
from dataclasses import dataclass
from fixturify import JsonAssert

@dataclass
class User:
    id: int
    name: str
    created_at: str

def test_user_response():
    user = User(id=123, name="John", created_at="2024-01-15T10:30:00Z")

    JsonAssert(user)\
        .ignore("id", "created_at")\
        .compare_to_file("./expected_user.json")
```

## Path resolution

Paths are resolved relative to the file calling `compare_to_file()`:

```
tests/
  test_users.py          <- JsonAssert(data).compare_to_file("./fixtures/expected.json")
  fixtures/
    expected.json        <- this file is used
```

---

# ObjectMapper

Bidirectional object-to-JSON mapping.

```python
from fixturify import ObjectMapper
```

## Serialization (object to JSON)

### to_json()

Returns JSON-compatible dict/list:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

user = User(name="John", age=30)
data = ObjectMapper(user).to_json()
# {"name": "John", "age": 30}
```

### to_json_string()

Returns JSON string:

```python
json_str = ObjectMapper(user).to_json_string()
# '{"name": "John", "age": 30}'

# With indentation
json_str = ObjectMapper(user).to_json_string(indent=2)
```

Uses UTF-8 encoding with `ensure_ascii=False`.

## Deserialization (JSON to object)

### to_object()

```python
data = {"name": "John", "age": 30}
user = ObjectMapper(data).to_object(User)
# User(name="John", age=30)
```

### From JSON string

```python
json_str = '{"name": "John", "age": 30}'
user = ObjectMapper(json_str).to_object(User)
```

### List of objects

```python
data = [{"name": "John", "age": 30}, {"name": "Jane", "age": 25}]
users = ObjectMapper(data).to_object(User)
# [User(name="John", age=30), User(name="Jane", age=25)]
```

## Supported types

### dataclasses

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

user = User(name="John", age=30)
ObjectMapper(user).to_json()  # {"name": "John", "age": 30}
```

### Pydantic v1/v2

```python
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

user = User(name="John", age=30)
ObjectMapper(user).to_json()
```

Both Pydantic v1 and v2 are detected automatically.

### SQLAlchemy

```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

user = User(id=1, name="John")
ObjectMapper(user).to_json()  # {"id": 1, "name": "John"}
```

Only already-loaded relationships are serialized (no lazy loading is triggered). During deserialization, only column names known to the mapper are used.

### SQLModel

```python
from sqlmodel import SQLModel, Field

class User(SQLModel, table=True):
    id: int = Field(primary_key=True)
    name: str

user = User(id=1, name="John")
ObjectMapper(user).to_json()
```

Both table and non-table SQLModel classes are supported.

### Plain Python objects

```python
class User:
    def __init__(self, name: str, age: int):
        self.name = name
        self.age = age

user = User(name="John", age=30)
ObjectMapper(user).to_json()  # {"name": "John", "age": 30}
```

## Nested objects

```python
@dataclass
class Address:
    city: str
    country: str

@dataclass
class User:
    name: str
    address: Address

user = User(name="John", address=Address(city="NYC", country="USA"))
ObjectMapper(user).to_json()
# {"name": "John", "address": {"city": "NYC", "country": "USA"}}
```

Nested objects are handled recursively with shared state for circular reference tracking.

## Lists and collections

```python
from typing import List

@dataclass
class Company:
    name: str
    employees: List[User]

company = Company(
    name="Acme",
    employees=[User(name="John", age=30), User(name="Jane", age=25)]
)
ObjectMapper(company).to_json()
# {"name": "Acme", "employees": [{"name": "John", "age": 30}, ...]}
```

**Supported collections:**

| Type | Serialized as | Deserialized to |
|------|--------------|-----------------|
| `list` | list | list |
| `tuple` | list | tuple |
| `set` / `frozenset` | sorted list (deterministic) | set |
| `dict` | dict | dict |
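The deterministic handling of sets can be reproduced with plain `json` and a `default` hook (a sketch of the idea, not fixturify's code):

```python
import json

def fallback(value):
    # Emit sets and frozensets as sorted lists so output is deterministic
    if isinstance(value, (set, frozenset)):
        return sorted(value)
    raise TypeError(f"unsupported type: {type(value).__name__}")

data = {"tags": {"b", "a", "c"}}
print(json.dumps(data, default=fallback))
# {"tags": ["a", "b", "c"]}
```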

## Special types

### Enums

```python
from enum import Enum

class Status(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"

@dataclass
class User:
    name: str
    status: Status

user = User(name="John", status=Status.ACTIVE)
ObjectMapper(user).to_json()
# {"name": "John", "status": "active"}

# Deserialization
data = {"name": "John", "status": "active"}
user = ObjectMapper(data).to_object(User)
# user.status == Status.ACTIVE
```

Serializes via `.value`, deserializes via `EnumClass(value)`.
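That round trip relies only on the standard `enum` API:

```python
from enum import Enum

class Status(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"

# Serialization side: the wire value is .value
assert Status.ACTIVE.value == "active"
# Deserialization side: EnumClass(value) looks the member back up
assert Status("active") is Status.ACTIVE
```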

### datetime / date / time

```python
from datetime import datetime, date, time

@dataclass
class Event:
    name: str
    created_at: datetime
    day: date
    start: time

event = Event(
    name="Meeting",
    created_at=datetime(2024, 1, 15, 10, 30),
    day=date(2024, 1, 15),
    start=time(10, 30),
)
ObjectMapper(event).to_json()
# {"name": "Meeting", "created_at": "2024-01-15T10:30:00", "day": "2024-01-15", "start": "10:30:00"}
```

Uses ISO 8601 format (`isoformat()` / `fromisoformat()`).

### UUID

```python
from uuid import UUID

@dataclass
class User:
    id: UUID
    name: str

user = User(id=UUID("12345678-1234-5678-1234-567812345678"), name="John")
ObjectMapper(user).to_json()
# {"id": "12345678-1234-5678-1234-567812345678", "name": "John"}
```

### bytes

Serialized to a UTF-8 string (decode errors are replaced). Not converted back to `bytes` during deserialization.
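The decode step corresponds to Python's lenient UTF-8 decoding (a sketch of the behaviour):

```python
raw = b"caf\xc3\xa9 \xff"  # valid UTF-8 ("caf\u00e9 "), then one invalid byte
text = raw.decode("utf-8", errors="replace")
print(text)  # "café \ufffd" - the invalid byte becomes U+FFFD
```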

## Optional fields

```python
from typing import Optional

@dataclass
class User:
    name: str
    email: Optional[str] = None

user = User(name="John", email=None)
ObjectMapper(user).to_json()
# {"name": "John", "email": null}
```

`Optional[T]` (Union with None) is handled correctly during deserialization.

## Circular references

ObjectMapper handles circular references using JSON `$ref` notation:

```python
from typing import Optional

@dataclass
class Node:
    value: int
    next: Optional["Node"] = None

a = Node(value=1)
b = Node(value=2)
a.next = b
b.next = a  # circular reference

ObjectMapper(a).to_json()
# {"value": 1, "next": {"value": 2, "next": {"$ref": "#"}}}
```

Shared objects (diamond pattern) also use `$ref` to avoid duplication:

```python
shared = SharedData(id=1)
root.a.data = shared  # fully serialized
root.b.data = shared  # {"$ref": "#/a/data"}
```
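This kind of reference tracking is typically implemented by memoizing object identities during the walk. A simplified sketch for plain dicts (not fixturify's implementation):

```python
def to_json(obj, seen=None, path="#"):
    # Replace any object already seen on this walk with {"$ref": path}
    if seen is None:
        seen = {}
    if id(obj) in seen:
        return {"$ref": seen[id(obj)]}
    if isinstance(obj, dict):
        seen[id(obj)] = path
        return {k: to_json(v, seen, f"{path}/{k}") for k, v in obj.items()}
    return obj

a = {"value": 1}
b = {"value": 2}
a["next"] = b
b["next"] = a  # circular reference
print(to_json(a))
# {'value': 1, 'next': {'value': 2, 'next': {'$ref': '#'}}}
```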

## Depth protection

The maximum serialization depth is 100 levels. Exceeding it raises a `ValueError`:

```
ValueError: Maximum serialization depth (100) exceeded at path: ...
```
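A depth guard of this kind can be sketched in a few lines (illustrative only; the limit and message wording are taken from the text above):

```python
def serialize(value, path="root", depth=0, max_depth=100):
    # Refuse to descend past max_depth levels
    if depth > max_depth:
        raise ValueError(
            f"Maximum serialization depth ({max_depth}) exceeded at path: {path}"
        )
    if isinstance(value, dict):
        return {k: serialize(v, f"{path}.{k}", depth + 1, max_depth)
                for k, v in value.items()}
    if isinstance(value, list):
        return [serialize(v, f"{path}[{i}]", depth + 1, max_depth)
                for i, v in enumerate(value)]
    return value

# A structure nested 101 levels deep trips the guard
deep = {}
node = deep
for _ in range(101):
    node["child"] = {}
    node = node["child"]
```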

## Error handling

| Error | When | Exception |
|-------|------|-----------|
| Serialization failure | Object can't be converted to JSON | `ValueError` |
| Deserialization failure | JSON can't be converted to target class | `ValueError` |
| Unsupported target class | Target type not recognized | `TypeError` |
| Max depth exceeded | Nesting deeper than 100 levels | `ValueError` |

---

# Combined usage

## Full example

A test that uses all modules together:

```python
from dataclasses import dataclass
from fixturify import sql, http, read, Phase, SqlAssert, JsonAssert, ObjectMapper, HttpTestConfig

@dataclass
class User:
    id: int
    name: str
    email: str

@dataclass
class ApiResponse:
    user: User
    token: str


# conftest.py fixtures (shared across tests):
#
# @pytest.fixture
# def sql_config() -> SqlTestConfig:
#     return SqlTestConfig(driver="psycopg2", host="localhost", database="testdb", user="postgres", password="postgres")
#
# @pytest.fixture
# def http_config() -> HttpTestConfig:
#     return HttpTestConfig(exclude_hosts=["testserver"], redact_response_body=["token"])


@sql(path="./fixtures/setup_users.sql")
@sql(path="./fixtures/cleanup.sql", phase=Phase.AFTER)
@http(path="./fixtures/external_api.json", redact_response_body=["session_id"])
@read.fixture(path="./fixtures/expected_user.json", fixture_name="expected_user", object_class=User)
def test_user_sync_flow(sql_assert: SqlAssert, expected_user: User):
    # 1. Verify initial DB state (SqlAssert)
    sql_assert.table("users").count(0)

    # 2. Call external API - recorded/replayed (http)
    import requests
    response = requests.post("https://api.example.com/users", json={"name": "John"})
    api_data = response.json()

    # 3. Map response to object (ObjectMapper)
    api_response = ObjectMapper(api_data).to_object(ApiResponse)
    assert api_response.user.name == "John"

    # 4. Save user to DB, then verify (SqlAssert)
    save_user_to_db(api_response.user)

    sql_assert.table("users").where(name="John").exists()
    sql_assert.table("users").where(name="John").has(email=expected_user.email)

    db_user = sql_assert.table("users").where(name="John").fetch_one(User)

    # 5. Compare DB user to expected fixture (JsonAssert)
    JsonAssert(db_user).ignore("id").compare_to_file("./fixtures/expected_db_user.json")
```

## Decorator stacking

Decorators can be freely combined on a single test function, but their order matters:

```python
@sql(path="./setup.sql")                    # 1st: executes SQL before test
@sql(path="./more_data.sql")                # 2nd: executes SQL before test
@sql(path="./cleanup.sql", phase=Phase.AFTER)  # runs after test (even on failure)
@http(path="./fixtures/api.json")           # 3rd: activates HTTP recording/playback
@read.fixture(path="./fixtures/user.json", fixture_name="user", object_class=User)  # 4th: loads fixture
def test_function(sql_assert: SqlAssert, user: User):
    ...
```

### Rules

- `@sql` decorators execute top-to-bottom for `BEFORE`; `AFTER` files run in a `finally` block, even if the test fails
- `@http` sets up HTTP mocking for the duration of the test
- `@read.fixture` injects data as function parameters - the parameter is removed from the signature so pytest doesn't try to resolve it as a pytest fixture
- `sql_assert` is a pytest fixture - just add it to the function signature, no decorator needed
- `JsonAssert` and `ObjectMapper` are regular classes - use them inside the test body, no decorator needed
- `sql_config` and `http_config` are pytest fixtures defined in `conftest.py` - auto-discovered by `@sql`, `@http`, and `sql_assert`
- Multiple `@sql` decorators can be stacked (any number, mix of BEFORE/AFTER)
- Multiple `@read.fixture` decorators can be stacked (each must have a unique `fixture_name`)
- Multiple `@http` decorators can be stacked
