Metadata-Version: 2.4
Name: utterance-core
Version: 0.0.1
Summary: Client-side semantic endpointing. Know when they're done talking.
Project-URL: Homepage, https://utterance.dev
Project-URL: Source, https://github.com/nizh0/Utterance
Project-URL: Bug Tracker, https://github.com/nizh0/Utterance/issues
Author-email: Utterance Contributors <hello@utterance.dev>
License: MIT
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Requires-Dist: numpy
Requires-Dist: onnxruntime>=1.15.0
Requires-Dist: platformdirs
Requires-Dist: requests
Description-Content-Type: text/markdown

# Utterance Python SDK

Client-side semantic endpointing. Know when they're done talking.

This is the Python SDK for the Utterance model. It provides a simple interface for loading the ONNX model and running inference for voice activity detection (VAD) and endpointing.

## Installation

```bash
pip install utterance-core
```

(Note: this package is not yet published to PyPI. For now, install it from a local checkout with `pip install -e .`.)

## Usage

```python
from utterance import Utterance
import numpy as np

# Initialize detector (downloads model automatically)
detector = Utterance()

# Create dummy features (replace with real MFCCs/features)
# Example shape: (1, 100, 40) - batch, time, features
dummy_features = np.random.randn(1, 100, 40).astype(np.float32)

# Run inference
result = detector.predict(dummy_features)

print("Speaking:", result["speaking"])
print("Thinking Pause:", result["thinking_pause"])
print("Turn Complete:", result["turn_complete"])
```
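The result dictionary above exposes three signals: `speaking`, `thinking_pause`, and `turn_complete`. A typical consumer maps these onto a turn-taking decision. The sketch below shows one way to do that; the helper name and decision rules are illustrative assumptions, not part of the SDK, and it assumes the three keys hold truthy/falsy values as in the example above.

```python
# Hypothetical helper (not part of the SDK): map the flags returned by
# detector.predict() to a turn-taking action for a voice agent.
def next_action(result: dict) -> str:
    """Decide what to do with one prediction result."""
    if result["speaking"]:
        return "listen"   # user is mid-utterance; keep capturing audio
    if result["thinking_pause"]:
        return "wait"     # silence, but the model thinks the turn isn't over
    if result["turn_complete"]:
        return "respond"  # safe to start replying
    return "wait"         # default: keep listening

print(next_action({"speaking": True, "thinking_pause": False, "turn_complete": False}))   # listen
print(next_action({"speaking": False, "thinking_pause": True, "turn_complete": False}))   # wait
print(next_action({"speaking": False, "thinking_pause": False, "turn_complete": True}))   # respond
```

Distinguishing `thinking_pause` from `turn_complete` is the point of semantic endpointing: a plain VAD would treat both as silence, while this lets the agent avoid interrupting a user who is merely pausing.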

## Contributing

See `CONTRIBUTING.md` in the main repository.
