Metadata-Version: 2.3
Name: waterfall-log
Version: 0.1.0
Summary: Request waterfall tracing for Starlette-compatible ASGI applications
Keywords: asgi,fastapi,observability,profiling,starlette
Requires-Python: >=3.11,<4.0
Classifier: Development Status :: 3 - Alpha
Classifier: Framework :: AsyncIO
Classifier: Framework :: FastAPI
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Logging
Description-Content-Type: text/markdown

# waterfall-log

`waterfall-log` is a small Python library for Starlette-compatible applications that prints a request waterfall to the console after every HTTP request.

It is designed for FastAPI and other ASGI apps built on Starlette, and it focuses on two things:

- capturing the Python call tree for one request
- making the slowest parts obvious in the console output

## What it does

For each HTTP request, the middleware does the following (a simplified sketch of the idea follows the list):

- profiles Python function calls in the active request task
- builds a nested call tree with timestamps
- prints a waterfall-style timeline to the configured output stream
- reports the hottest frames by inclusive and self time
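
The following is a much-simplified illustration of the idea, not the library's actual implementation: it wraps the downstream ASGI app in a `cProfile` session and prints the slowest functions once the response has been sent. It lacks the per-task isolation, nested call tree, and waterfall rendering that `waterfall_log` provides.

```python
import cProfile
import pstats


class NaiveProfilingMiddleware:
    """Illustrative only: profiles every HTTP request with cProfile."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            # Pass WebSocket and lifespan scopes through untouched.
            await self.app(scope, receive, send)
            return

        profiler = cProfile.Profile()
        profiler.enable()
        try:
            await self.app(scope, receive, send)
        finally:
            profiler.disable()
            # Show the ten most expensive functions by cumulative time.
            pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```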

## Install

For local development of this repository, install its dependencies with Poetry:

```bash
poetry install --with dev,demo
```

To use the library in another project, install it from a built artifact (see `poetry build` below) or, once published, directly from PyPI:

```bash
pip install waterfall-log
```

## Quick start

```python
from fastapi import FastAPI

from waterfall_log import WaterfallMiddleware

app = FastAPI()
app.add_middleware(WaterfallMiddleware)


@app.get("/hello")
async def hello() -> dict[str, str]:
    return {"message": "hello"}
```
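
The middleware is not tied to FastAPI; any Starlette-compatible ASGI app can use it. A minimal Starlette variant (the route here is illustrative):

```python
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.responses import JSONResponse
from starlette.routing import Route

from waterfall_log import WaterfallMiddleware


async def hello(request):
    return JSONResponse({"message": "hello"})


# Register the middleware at application construction time.
app = Starlette(
    routes=[Route("/hello", hello)],
    middleware=[Middleware(WaterfallMiddleware)],
)
```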

Run the bundled FastAPI demo, `sample_app.py`:

```bash
poetry run uvicorn sample_app:app --reload
```

Then call:

```bash
curl http://127.0.0.1:8000/report/42
```

Example output:

```text
Request 200 GET /report/42 took 86.54 ms
Hotspots
  38.12 ms total | 36.89 ms self  sample_app.py:24 load_line_items
  21.07 ms total | 20.81 ms self  sample_app.py:36 render_summary
Waterfall
    0.00 ms |############################################################|   86.54 ms 100.0% GET /report/42
    1.14 ms | ###                                                        |    4.93 ms   5.7% sample_app.py:51 compute_discount
    7.03 ms |     ##########################                             |   38.12 ms  44.0% sample_app.py:24 load_line_items  <<< hottest
   49.82 ms |                                  ###############           |   21.07 ms  24.3% sample_app.py:36 render_summary
```
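
Each waterfall row shows a frame's start offset into the request, a bar positioned and scaled against the total request duration, the frame's inclusive time, and its share of the request; for example, `load_line_items` takes 38.12 ms of the 86.54 ms request, about 44.0%, and is flagged as the hottest frame. The hotspot list above it splits each frame into inclusive (total) and self time.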

## Notes

- The profiler automatically isolates the active asyncio task, so overlapping requests handled on the same event loop do not share one trace.
- Work executed in background threads or native extensions is not profiled directly; time spent there still shows up in the waiting parent frame (see the sketch after this list).
- The middleware only traces HTTP requests. WebSocket and lifespan scopes pass through unchanged.
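
For example, a handler that offloads blocking work with `asyncio.to_thread` will not have the worker-thread frames traced, but the elapsed time still appears on the awaiting coroutine's frame. The endpoint and helper names below are illustrative:

```python
import asyncio
import time

from fastapi import FastAPI

from waterfall_log import WaterfallMiddleware

app = FastAPI()
app.add_middleware(WaterfallMiddleware)


def crunch() -> int:
    # Runs on a worker thread, so these frames are not profiled directly.
    time.sleep(0.05)
    return 42


@app.get("/offloaded")
async def offloaded() -> dict[str, int]:
    # The ~50 ms spent waiting here is attributed to this coroutine's frame.
    return {"value": await asyncio.to_thread(crunch)}
```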

## Poetry workflow

Install dependencies for local work:

```bash
poetry install --with dev,demo
```

Run tests:

```bash
poetry run pytest
```
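
`tests/test_middleware.py` is a smoke test for the middleware output. A sketch of that style of test, assuming the default output stream is stdout so pytest's `capsys` fixture can capture it (the bundled test's exact assertions may differ):

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

from waterfall_log import WaterfallMiddleware


def test_waterfall_is_printed(capsys) -> None:
    app = FastAPI()
    app.add_middleware(WaterfallMiddleware)

    @app.get("/hello")
    async def hello() -> dict[str, str]:
        return {"message": "hello"}

    client = TestClient(app)
    response = client.get("/hello")

    assert response.status_code == 200
    # Assumes the waterfall is printed to stdout with a "Waterfall" header.
    assert "Waterfall" in capsys.readouterr().out
```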

Build publishable artifacts:

```bash
poetry build
```

Validate the `pyproject.toml` configuration:

```bash
poetry check
```

Publish to PyPI:

```bash
poetry config pypi-token.pypi <token>
poetry publish --build
```

If you want to publish to TestPyPI first:

```bash
poetry config repositories.testpypi https://test.pypi.org/legacy/
poetry publish --build --repository testpypi
```

## Files

- `src/waterfall_log`: library package
- `sample_app.py`: runnable FastAPI demo
- `tests/test_middleware.py`: smoke test for middleware output

