Metadata-Version: 2.4
Name: py-smarttimer
Version: 0.1.0
Summary: Tiny, friendly timing utilities for Python code blocks and functions.
Author-email: Nipun Sujesh <your.email@example.com>
License: MIT
Project-URL: Homepage, https://github.com/Luc0-0/smarttimer
Project-URL: Source, https://github.com/Luc0-0/smarttimer
Project-URL: Bug-Reports, https://github.com/Luc0-0/smarttimer/issues
Project-URL: Documentation, https://github.com/Luc0-0/smarttimer#readme
Keywords: timer,benchmark,profiling,performance,timing,measure
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Benchmark
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: memory
Requires-Dist: psutil>=5.0.0; extra == "memory"
Dynamic: license-file

# ⏱️ smarttimer

[![PyPI version](https://badge.fury.io/py/py-smarttimer.svg)](https://badge.fury.io/py/py-smarttimer)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Tiny, friendly timing utilities for Python code blocks and functions. Perfect for quick performance checks and micro-benchmarks.

## 🚀 Install

```bash
pip install py-smarttimer
```

## 📖 Quick Start

```python
from smarttimer import time_block, benchmark, measure, compare
```

## 🎯 Features

### ⏲️ Time any code block

```python
import pandas as pd  # example dependency; not required by smarttimer

from smarttimer import time_block

with time_block("data processing"):
    df = pd.read_csv("large_file.csv")
    result = df.groupby("category").sum()
```

```
[smarttimer] data processing took 2.3451s
```

### 🎪 Benchmark functions with a decorator

```python
from smarttimer import benchmark

@benchmark
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

result = fibonacci(30)  # Automatically prints timing
```

### 📊 Measure with statistics

```python
from smarttimer import measure

def matrix_multiply(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# Run 10 times with 2 warmup runs
result, elapsed = measure(
    matrix_multiply,
    [[1,2],[3,4]], [[5,6],[7,8]],
    repeats=10,
    warmup=2
)
```

### 🏁 Compare multiple functions

```python
from smarttimer import compare

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
    return arr

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

# Compare performance (note: the args tuple is built once and reused for every
# repeat, so bubble_sort's in-place sort receives already-sorted input after run 1)
data = [64, 34, 25, 12, 22, 11, 90]
compare(bubble_sort, quick_sort, args=(data.copy(),), repeats=100)
```

```
[smarttimer] Function comparison (100 runs):
  quick_sort: 0.000012s ± 0.000003s (fastest)
  bubble_sort: 0.000089s ± 0.000012s (7.4x slower)
```

### 🔍 Silent timing for custom logic

```python
from smarttimer import TimingContext

with TimingContext() as timer:
    expensive_computation()

if timer.elapsed > 1.0:
    print(f"Slow operation detected: {timer.elapsed:.2f}s")
```

### 💾 Memory profiling (optional)

```python
from smarttimer import profile_memory

@profile_memory
def load_large_dataset():
    return [i**2 for i in range(1_000_000)]

data = load_large_dataset()
```

```
[smarttimer] load_large_dataset took 0.1234s, memory: 45.2MB → 82.1MB (+36.9MB)
```

_Memory profiling requires `psutil`: install it directly (`pip install psutil`) or via the extra (`pip install "py-smarttimer[memory]"`)._

## 🛠️ Advanced Usage

### Disable timing conditionally

```python
DEBUG = False

with time_block("debug operation", enabled=DEBUG):
    debug_heavy_computation()  # Only timed when DEBUG=True
```

### Custom output and precision

```python
import sys
from smarttimer import benchmark

@benchmark(precision=6, output=sys.stderr)
def precise_operation():
    return sum(i**0.5 for i in range(10000))
```

### Warmup runs for accurate benchmarks

```python
# Skip first 3 runs to avoid cold start effects
result, time_taken = measure(
    compiled_function,
    args,
    repeats=20,
    warmup=3
)
```
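The idea behind warmup is simple: the first few calls often pay one-time costs (imports, JIT or cache warming, lazy initialization) that would skew an average. Here's a plain-Python sketch of a repeat-and-warmup loop in the spirit of `measure`; the name `timed_repeats` and its internals are illustrative, not smarttimer's actual implementation:

```python
import time


def timed_repeats(func, *args, repeats=5, warmup=1):
    """Run func warmup+repeats times; discard warmup timings, return (result, avg_seconds)."""
    timings = []
    result = None
    for i in range(warmup + repeats):
        start = time.perf_counter()
        result = func(*args)
        elapsed = time.perf_counter() - start
        if i >= warmup:  # keep only the post-warmup measurements
            timings.append(elapsed)
    return result, sum(timings) / len(timings)


result, avg = timed_repeats(sum, range(1000), repeats=5, warmup=2)
```

Discarding the warmup runs and averaging the rest is what makes repeated measurements comparable across machines and runs.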

## 🎨 Why smarttimer?

- **Zero dependencies** (except optional `psutil` for memory profiling)
- **Minimal overhead** - uses `time.perf_counter()` for precision
- **Flexible** - works as context manager, decorator, or function
- **Clean output** - consistent, readable timing reports
- **Production ready** - disable timing in production with `enabled=False`
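The "minimal overhead" and `enabled=False` points boil down to a small pattern: a context manager that reads `time.perf_counter()` on entry and exit. A sketch of that pattern (this is not smarttimer's source; `tick` and its parameters are made-up names for illustration):

```python
import time
from contextlib import contextmanager


@contextmanager
def tick(label, enabled=True):
    """Illustrative time_block-style context manager built on perf_counter."""
    start = time.perf_counter()
    try:
        yield
    finally:
        if enabled:  # skip reporting entirely when disabled, e.g. in production
            elapsed = time.perf_counter() - start
            print(f"[tick] {label} took {elapsed:.4f}s")


with tick("sleep demo"):
    time.sleep(0.01)
```

`perf_counter()` is the right clock for this job: it's monotonic and has the highest available resolution, unlike `time.time()`, which can jump when the system clock is adjusted.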

## 📦 API Reference

| Function                              | Purpose                | Returns                 |
| ------------------------------------- | ---------------------- | ----------------------- |
| `time_block(name)`                    | Time a code block      | Context manager         |
| `@benchmark`                          | Time a function call   | Decorated function      |
| `measure(func, *args, repeats=1)`     | Benchmark with repeats | `(result, elapsed)`     |
| `compare(*funcs, args=(), repeats=5)` | Compare functions      | Statistics dict         |
| `TimingContext()`                     | Silent timing          | Context with `.elapsed` |
| `@profile_memory`                     | Time + memory usage    | Decorated function      |

## 🤝 Contributing

Found a bug? Want a feature? [Open an issue](https://github.com/Luc0-0/smarttimer/issues) or submit a PR!

## 📄 License

MIT License - see [LICENSE](LICENSE) file for details.
