Metadata-Version: 2.4
Name: polyollama
Version: 0.1.0
Summary: Run multiple Ollama servers in parallel for concurrent LLM inference
Project-URL: Homepage, https://github.com/okanyenigun/polyollama
Project-URL: Repository, https://github.com/okanyenigun/polyollama
License: MIT
Keywords: inference,langchain,llm,ollama,parallel
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Requires-Dist: langchain-ollama>=0.2.0
Requires-Dist: langchain>=0.3.0
Description-Content-Type: text/markdown

# polyollama

Run multiple [Ollama](https://ollama.com) servers in parallel to maximize LLM inference throughput on a single machine.

## Installation

```bash
pip install polyollama
```

Requires [Ollama](https://ollama.com/download) on your `PATH`.
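
## How it works

Running several Ollama servers on one machine amounts to starting `ollama serve` once per port, using Ollama's `OLLAMA_HOST` environment variable to bind each instance to a distinct address. A minimal sketch of that idea, independent of polyollama's actual API (`server_configs` is a hypothetical helper written for illustration, not part of the package):

```python
import os
import subprocess

BASE_PORT = 11434  # Ollama's default port


def server_configs(n, base_port=BASE_PORT):
    """Build (port, env) pairs for n parallel Ollama servers,
    each bound to its own localhost port via OLLAMA_HOST."""
    configs = []
    for i in range(n):
        port = base_port + i
        env = dict(os.environ, OLLAMA_HOST=f"127.0.0.1:{port}")
        configs.append((port, env))
    return configs


# Launching requires the `ollama` binary on PATH; shown commented out:
# procs = [subprocess.Popen(["ollama", "serve"], env=env)
#          for _, env in server_configs(4)]

print([port for port, _ in server_configs(3)])  # → [11434, 11435, 11436]
```

Requests can then be spread across the resulting `http://127.0.0.1:<port>` endpoints (e.g. round-robin) so inference runs concurrently instead of queuing behind a single server.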

## License

MIT
