Metadata-Version: 2.4
Name: abstractcore
Version: 2.11.6
Summary: Unified interface to all LLM providers with essential infrastructure for tool calling, streaming, and model management
Author-email: Laurent-Philippe Albou <contact@abstractcore.ai>
Maintainer-email: Laurent-Philippe Albou <contact@abstractcore.ai>
License: MIT
Project-URL: Homepage, https://lpalbou.github.io/AbstractCore
Project-URL: Documentation, https://github.com/lpalbou/AbstractCore#readme
Project-URL: Repository, https://github.com/lpalbou/AbstractCore
Project-URL: Bug Tracker, https://github.com/lpalbou/AbstractCore/issues
Project-URL: Changelog, https://github.com/lpalbou/AbstractCore/blob/main/CHANGELOG.md
Keywords: llm,openai,anthropic,ollama,lmstudio,huggingface,mlx,ai,machine-learning,natural-language-processing,tool-calling,streaming
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic<3.0.0,>=2.0.0
Requires-Dist: httpx<1.0.0,>=0.24.0
Provides-Extra: openai
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "anthropic"
Provides-Extra: ollama
Provides-Extra: lmstudio
Provides-Extra: huggingface
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "huggingface"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "huggingface"
Requires-Dist: torchvision>=0.17.0; extra == "huggingface"
Requires-Dist: torchaudio>=2.1.0; extra == "huggingface"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "huggingface"
Requires-Dist: outlines>=0.1.0; extra == "huggingface"
Provides-Extra: mlx
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "mlx"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "mlx"
Requires-Dist: outlines>=0.1.0; extra == "mlx"
Provides-Extra: mlx-bench
Requires-Dist: matplotlib<4.0.0,>=3.8.0; extra == "mlx-bench"
Provides-Extra: vllm
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "vllm"
Provides-Extra: embeddings
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "embeddings"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "embeddings"
Provides-Extra: tokens
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "tokens"
Provides-Extra: tools
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "tools"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "tools"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "tools"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "tools"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "tools"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "tools"
Provides-Extra: tool
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "tool"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "tool"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "tool"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "tool"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "tool"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "tool"
Provides-Extra: media
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "media"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "media"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "media"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "media"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "media"
Provides-Extra: compression
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "compression"
Provides-Extra: all
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all"
Requires-Dist: torchvision>=0.17.0; extra == "all"
Requires-Dist: torchaudio>=2.1.0; extra == "all"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all"
Requires-Dist: outlines>=0.1.0; extra == "all"
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "all"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "all"
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "all"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "all"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all"
Requires-Dist: abstractvision>=0.2.0; extra == "all"
Provides-Extra: all-apple
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all-apple"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all-apple"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all-apple"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all-apple"
Requires-Dist: torchvision>=0.17.0; extra == "all-apple"
Requires-Dist: torchaudio>=2.1.0; extra == "all-apple"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all-apple"
Requires-Dist: outlines>=0.1.0; extra == "all-apple"
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "all-apple"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "all-apple"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all-apple"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "all-apple"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all-apple"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all-apple"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all-apple"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all-apple"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all-apple"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all-apple"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all-apple"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all-apple"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all-apple"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all-apple"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all-apple"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all-apple"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all-apple"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all-apple"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all-apple"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all-apple"
Requires-Dist: abstractvision>=0.2.0; extra == "all-apple"
Provides-Extra: all-gpu
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all-gpu"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all-gpu"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all-gpu"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all-gpu"
Requires-Dist: torchvision>=0.17.0; extra == "all-gpu"
Requires-Dist: torchaudio>=2.1.0; extra == "all-gpu"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all-gpu"
Requires-Dist: outlines>=0.1.0; extra == "all-gpu"
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "all-gpu"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all-gpu"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "all-gpu"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all-gpu"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all-gpu"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all-gpu"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all-gpu"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all-gpu"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all-gpu"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all-gpu"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all-gpu"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all-gpu"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all-gpu"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all-gpu"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all-gpu"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all-gpu"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all-gpu"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all-gpu"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all-gpu"
Requires-Dist: abstractvision>=0.2.0; extra == "all-gpu"
Provides-Extra: all-non-mlx
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all-non-mlx"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all-non-mlx"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all-non-mlx"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all-non-mlx"
Requires-Dist: torchvision>=0.17.0; extra == "all-non-mlx"
Requires-Dist: torchaudio>=2.1.0; extra == "all-non-mlx"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all-non-mlx"
Requires-Dist: outlines>=0.1.0; extra == "all-non-mlx"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all-non-mlx"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "all-non-mlx"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all-non-mlx"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all-non-mlx"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all-non-mlx"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all-non-mlx"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all-non-mlx"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all-non-mlx"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all-non-mlx"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all-non-mlx"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all-non-mlx"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all-non-mlx"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all-non-mlx"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all-non-mlx"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all-non-mlx"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all-non-mlx"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all-non-mlx"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all-non-mlx"
Requires-Dist: abstractvision>=0.2.0; extra == "all-non-mlx"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Provides-Extra: server
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "server"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "server"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "server"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "server"
Requires-Dist: abstractvision>=0.2.0; extra == "server"
Provides-Extra: vision
Requires-Dist: abstractvision>=0.2.0; extra == "vision"
Provides-Extra: vision-diffusers
Requires-Dist: abstractvision[huggingface]>=0.2.0; extra == "vision-diffusers"
Provides-Extra: vision-sdcpp
Requires-Dist: abstractvision[sdcpp]>=0.2.0; extra == "vision-sdcpp"
Provides-Extra: vision-local
Requires-Dist: abstractvision[local]>=0.2.0; extra == "vision-local"
Provides-Extra: test
Requires-Dist: pytest>=7.0.0; extra == "test"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "test"
Requires-Dist: pytest-mock>=3.10.0; extra == "test"
Requires-Dist: pytest-cov>=4.0.0; extra == "test"
Requires-Dist: responses>=0.23.0; extra == "test"
Requires-Dist: httpx>=0.24.0; extra == "test"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.5.0; extra == "docs"
Requires-Dist: mkdocs-material>=9.0.0; extra == "docs"
Requires-Dist: mkdocstrings[python]>=0.22.0; extra == "docs"
Requires-Dist: mkdocs-autorefs>=0.4.0; extra == "docs"
Provides-Extra: full-dev
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "full-dev"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "full-dev"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "full-dev"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "full-dev"
Requires-Dist: torchvision>=0.17.0; extra == "full-dev"
Requires-Dist: torchaudio>=2.1.0; extra == "full-dev"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "full-dev"
Requires-Dist: outlines>=0.1.0; extra == "full-dev"
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "full-dev"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "full-dev"
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "full-dev"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "full-dev"
Requires-Dist: numpy<2.0.0,>=1.20.0; extra == "full-dev"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "full-dev"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "full-dev"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "full-dev"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "full-dev"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "full-dev"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "full-dev"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "full-dev"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "full-dev"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "full-dev"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "full-dev"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "full-dev"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "full-dev"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "full-dev"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "full-dev"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "full-dev"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "full-dev"
Requires-Dist: abstractvision>=0.2.0; extra == "full-dev"
Requires-Dist: pytest>=7.0.0; extra == "full-dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "full-dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "full-dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "full-dev"
Requires-Dist: responses>=0.23.0; extra == "full-dev"
Requires-Dist: black>=23.0.0; extra == "full-dev"
Requires-Dist: isort>=5.12.0; extra == "full-dev"
Requires-Dist: mypy>=1.5.0; extra == "full-dev"
Requires-Dist: ruff>=0.1.0; extra == "full-dev"
Requires-Dist: pre-commit>=3.0.0; extra == "full-dev"
Requires-Dist: mkdocs>=1.5.0; extra == "full-dev"
Requires-Dist: mkdocs-material>=9.0.0; extra == "full-dev"
Requires-Dist: mkdocstrings[python]>=0.22.0; extra == "full-dev"
Requires-Dist: mkdocs-autorefs>=0.4.0; extra == "full-dev"
Dynamic: license-file

# AbstractCore

[![PyPI version](https://img.shields.io/pypi/v/abstractcore.svg)](https://pypi.org/project/abstractcore/)
[![Python Version](https://img.shields.io/pypi/pyversions/abstractcore)](https://pypi.org/project/abstractcore/)
[![license](https://img.shields.io/github/license/lpalbou/AbstractCore)](https://github.com/lpalbou/AbstractCore/blob/main/LICENSE)
[![GitHub stars](https://img.shields.io/github/stars/lpalbou/AbstractCore?style=social)](https://github.com/lpalbou/AbstractCore/stargazers)

Unified LLM Interface
> Write once, run everywhere

AbstractCore is a Python library that provides a unified `create_llm(...)` API across cloud + local LLM providers (OpenAI, Anthropic, Ollama, LMStudio, and more). The default install is intentionally lightweight; add providers and optional subsystems via explicit install extras.

First-class support for:
- sync + async
- streaming + non-streaming
- universal tool calling (native + prompted tool syntax)
- structured output (Pydantic)
- media input (images/audio/video + documents) with explicit, policy-driven fallbacks (*)
- optional capability plugins (`core.voice/core.audio/core.vision`) for deterministic TTS/STT and generative vision (via `abstractvoice` / `abstractvision`)
- glyph visual-text compression for long documents (**)
- a unified OpenAI-compatible endpoint for all providers and models

(*) Media input is policy-driven (no silent semantic changes). If a model doesn’t support images, AbstractCore can use a configured vision model to generate short visual observations and inject them into your text-only request (vision fallback). Audio/video attachments are also policy-driven (`audio_policy`, `video_policy`) and may require capability plugins for fallbacks. See [Media Handling](docs/media-handling-system.md) and [Centralized Config](docs/centralized-config.md).

(**) Optional visual-text compression: render long text/PDFs into images and process them with a vision model to reduce token usage. See [Glyph Visual-Text Compression](docs/glyphs.md) (install `pip install "abstractcore[compression]"`; for PDFs also install `pip install "abstractcore[media]"`).

Docs: [Getting Started](docs/getting-started.md) · [FAQ](docs/faq.md) · [Docs Index](docs/README.md) · https://lpalbou.github.io/AbstractCore

## Install

```bash
# Core (small, lightweight default)
pip install abstractcore

# Providers
pip install "abstractcore[openai]"       # OpenAI SDK
pip install "abstractcore[anthropic]"    # Anthropic SDK
pip install "abstractcore[huggingface]"  # Transformers / torch (heavy)
pip install "abstractcore[mlx]"          # Apple Silicon local inference (heavy)
pip install "abstractcore[vllm]"         # NVIDIA CUDA / ROCm (heavy)

# Optional features
pip install "abstractcore[tools]"       # built-in web tools (web_search, skim_websearch, skim_url, fetch_url)
pip install "abstractcore[media]"       # images, PDFs, Office docs
pip install "abstractcore[compression]" # glyph visual-text compression (Pillow-only)
pip install "abstractcore[embeddings]"  # EmbeddingManager + local embedding models
pip install "abstractcore[tokens]"      # precise token counting (tiktoken)
pip install "abstractcore[server]"      # OpenAI-compatible HTTP gateway

# Combine extras (zsh: keep quotes)
pip install "abstractcore[openai,media,tools]"

# Turnkey "everything" installs (pick one)
pip install "abstractcore[all-apple]"    # macOS/Apple Silicon (includes MLX, excludes vLLM)
pip install "abstractcore[all-non-mlx]"  # Linux/Windows/Intel Mac (excludes MLX and vLLM)
pip install "abstractcore[all-gpu]"      # Linux NVIDIA GPU (includes vLLM, excludes MLX)
```

## Quickstart

OpenAI example (requires `pip install "abstractcore[openai]"`):

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("What is the capital of France?")
print(response.content)
```

### Conversation state (`BasicSession`)

```python
from abstractcore import create_llm, BasicSession

session = BasicSession(create_llm("anthropic", model="claude-haiku-4-5"))
print(session.generate("Give me 3 bakery name ideas.").content)
print(session.generate("Pick the best one and explain why.").content)
```

### Streaming

```python
from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b-instruct")
for chunk in llm.generate("Write a short poem about distributed systems.", stream=True):
    print(chunk.content or "", end="", flush=True)
```

### Async

```python
import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-4o-mini")
    resp = await llm.agenerate("Give me 5 bullet points about HTTP caching.")
    print(resp.content)

asyncio.run(main())
```

## Token budgets (unified)

```python
from abstractcore import create_llm

llm = create_llm(
    "openai",
    model="gpt-4o-mini",
    max_tokens=8000,        # total budget (input + output)
    max_output_tokens=1200, # output cap
)
```

## Providers (common)

- `openai`: `OPENAI_API_KEY`, optional `OPENAI_BASE_URL`
- `anthropic`: `ANTHROPIC_API_KEY`, optional `ANTHROPIC_BASE_URL`
- `openrouter`: `OPENROUTER_API_KEY`, optional `OPENROUTER_BASE_URL` (default: `https://openrouter.ai/api/v1`)
- `ollama`: local server at `OLLAMA_BASE_URL` (or legacy `OLLAMA_HOST`)
- `lmstudio`: OpenAI-compatible local server at `LMSTUDIO_BASE_URL` (default: `http://localhost:1234/v1`)
- `vllm`: OpenAI-compatible server at `VLLM_BASE_URL` (default: `http://localhost:8000/v1`)
- `openai-compatible`: generic OpenAI-compatible endpoints via `OPENAI_COMPATIBLE_BASE_URL` (default: `http://localhost:1234/v1`)
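As a quick sketch, the environment variables above can be exported before creating an LLM. The keys below are placeholders; the local-server URLs are the defaults listed above (Ollama's is the conventional one — adjust if your setup differs):

```shell
# Cloud providers (placeholder keys — use your own)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENROUTER_API_KEY="sk-or-..."

# Local servers (override if yours listens elsewhere)
export OLLAMA_BASE_URL="http://localhost:11434"
export LMSTUDIO_BASE_URL="http://localhost:1234/v1"
export VLLM_BASE_URL="http://localhost:8000/v1"
```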

You can also persist settings (including API keys) via the config CLI:
- `abstractcore --status`
- `abstractcore --configure` (alias: `--config`)
- `abstractcore --set-api-key openai sk-...`

## What’s inside (quick tour)

- Tools: universal tool calling across providers → [Tool Calling](docs/tool-calling.md)
- Built-in tools (optional): web + filesystem helpers (`skim_websearch`, `skim_url`, `fetch_url`, `read_file`, …) → [Tool Calling](docs/tool-calling.md)
- Tool syntax rewriting: `tool_call_tags` (Python) and `agent_format` (server) → [Tool Syntax Rewriting](docs/tool-syntax-rewriting.md)
- Structured output: Pydantic-first with provider-aware strategies → [Structured Output](docs/structured-output.md)
- Media input: images/audio/video + documents (policies + fallbacks) → [Media Handling](docs/media-handling-system.md) and [Vision Capabilities](docs/vision-capabilities.md)
- Capability plugins (optional): deterministic `llm.voice/llm.audio/llm.vision` surfaces → [Capabilities](docs/capabilities.md)
- Glyph visual-text compression: scale long-context document analysis via VLMs → [Glyph Visual-Text Compression](docs/glyphs.md)
- Embeddings and semantic search → [Embeddings](docs/embeddings.md)
- Observability: global event bus + interaction traces → [Architecture](docs/architecture.md), [API Reference (Events)](docs/api-reference.md#eventtype), [Interaction Tracing](docs/interaction-tracing.md)
- MCP (Model Context Protocol): discover tools from MCP servers (HTTP/stdio) → [MCP](docs/mcp.md)
- OpenAI-compatible server: one `/v1` gateway for chat + optional `/v1/images/*` and `/v1/audio/*` endpoints → [Server](docs/server.md)

## Tool calling (passthrough by default)

By default (`execute_tools=False`), AbstractCore:
- returns clean assistant text in `response.content`
- returns structured tool calls in `response.tool_calls` (host/runtime executes them)

```python
from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])

print(resp.content)
print(resp.tool_calls)
```
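In passthrough mode the host runtime is responsible for executing the calls in `response.tool_calls`. A minimal dispatch sketch — assuming each entry exposes a tool `name` and an `arguments` dict (the exact attribute names may differ; see [Tool Calling](docs/tool-calling.md)) — looks like this, using a stand-in `ToolCall` class so it runs without a provider:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    # Stand-in for one entry of response.tool_calls; real attribute
    # names may differ (see docs/tool-calling.md).
    name: str
    arguments: dict = field(default_factory=dict)

def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

# Map tool names to the Python callables the host is willing to run.
REGISTRY = {"get_weather": get_weather}

def execute_tool_calls(tool_calls):
    """Dispatch each tool call to a registered function, collecting results."""
    results = []
    for call in tool_calls:
        fn = REGISTRY.get(call.name)
        if fn is None:
            results.append(f"unknown tool: {call.name}")
            continue
        results.append(fn(**call.arguments))
    return results

# Simulated passthrough result (no provider call made here):
calls = [ToolCall("get_weather", {"city": "Paris"})]
print(execute_tool_calls(calls))  # → ['Paris: 22°C and sunny']
```

In a real loop you would feed each result back to the model as a tool-result message and generate again until no further tool calls are returned.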

If you need tool-call markup preserved or rewritten in `content` for downstream parsers, pass
`tool_call_tags=...` (e.g. `"qwen3"`, `"llama3"`, `"xml"`). See [Tool Syntax Rewriting](docs/tool-syntax-rewriting.md).

## Structured output

```python
from pydantic import BaseModel
from abstractcore import create_llm

class Answer(BaseModel):
    title: str
    bullets: list[str]

llm = create_llm("openai", model="gpt-4o-mini")
answer = llm.generate("Summarize HTTP/3 in 3 bullets.", response_model=Answer)
print(answer.bullets)
```

## Media input (images/audio/video)

Requires `pip install "abstractcore[media]"`.

```python
from abstractcore import create_llm

llm = create_llm("anthropic", model="claude-haiku-4-5")
resp = llm.generate("Describe the image.", media=["./image.png"])
print(resp.content)
```

Notes:
- **Images**: use a vision-capable model, or configure **vision fallback** for text-only models (`abstractcore --config`; `abstractcore --set-vision-provider PROVIDER MODEL`).
- **Video**: `video_policy="auto"` (default) uses native video when supported, otherwise samples frames (requires `ffmpeg`/`ffprobe`) and routes them through image/vision handling (so you still need a vision-capable model or vision fallback configured).
- **Audio**: use an audio-capable model, or set `audio_policy="auto"`/`"speech_to_text"` and install `abstractvoice` for speech-to-text.

Configure defaults (optional):

```bash
abstractcore --status
abstractcore --set-vision-provider lmstudio qwen/qwen3-vl-4b
abstractcore --set-audio-strategy auto
abstractcore --set-video-strategy auto
```

See [Media Handling](docs/media-handling-system.md) and [Vision Capabilities](docs/vision-capabilities.md).

## HTTP server (OpenAI-compatible gateway)

```bash
pip install "abstractcore[server]"
python -m abstractcore.server.app
```

Use any OpenAI-compatible client, and route to any provider/model via `model="provider/model"`:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ollama/qwen3:4b-instruct",
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(resp.choices[0].message.content)
```

See [Server](docs/server.md).

## CLI (optional)

Interactive chat:

```bash
abstractcore-chat --provider openai --model gpt-4o-mini
abstractcore-chat --provider lmstudio --model qwen/qwen3-4b-2507 --base-url http://localhost:1234/v1
abstractcore-chat --provider openrouter --model openai/gpt-4o-mini
```

Token limits:
- startup: `abstractcore-chat --max-tokens 8192 --max-output-tokens 1024 ...`
- in-REPL: `/max-tokens 8192` and `/max-output-tokens 1024`

## Built-in CLI apps

AbstractCore also ships with ready-to-use CLI apps:
- `summarizer`, `extractor`, `judge`, `intent`, `deepsearch` (see [docs/apps/](docs/apps/))

## Documentation map

Start here:
- [Docs Index](docs/README.md) — navigation for all docs
- [Prerequisites](docs/prerequisites.md) — provider setup (keys, local servers, hardware notes)
- [Getting Started](docs/getting-started.md) — first call + core concepts
- [FAQ](docs/faq.md) — common questions and setup gotchas
- [Examples](docs/examples.md) — end-to-end patterns and recipes
- [Troubleshooting](docs/troubleshooting.md) — common failures and fixes

Core features:
- [Tool Calling](docs/tool-calling.md) — universal tools across providers (native + prompted)
- [Tool Syntax Rewriting](docs/tool-syntax-rewriting.md) — rewrite tool-call syntax for different runtimes/clients
- [Structured Output](docs/structured-output.md) — schema enforcement + retry strategies
- [Media Handling](docs/media-handling-system.md) — images/audio/video + documents (policies + fallbacks)
- [Vision Capabilities](docs/vision-capabilities.md) — image/video input, vision fallback, and how this differs from generative vision
- [Glyph Visual-Text Compression](docs/glyphs.md) — compress long documents into images for VLMs
- [Generation Parameters](docs/generation-parameters.md) — unified parameter vocabulary and provider quirks
- [Session Management](docs/session.md) — conversation history, persistence, and compaction
- [Embeddings](docs/embeddings.md) — embeddings API and RAG building blocks
- [Async Guide](docs/async-guide.md) — async patterns, concurrency, best practices
- [Centralized Config](docs/centralized-config.md) — `~/.abstractcore/config/abstractcore.json` + CLI config commands
- [Capabilities](docs/capabilities.md) — supported features and current limitations
- [Interaction Tracing](docs/interaction-tracing.md) — inspect prompts/responses/usage for observability
- [MCP](docs/mcp.md) — consume MCP tool servers (HTTP/stdio) as tool sources

Reference and internals:
- [Architecture](docs/architecture.md) — system overview + event system
- [API (Python)](docs/api.md) — how to use the public API
- [API Reference](docs/api-reference.md) — Python API (including events)
- [Server](docs/server.md) — OpenAI-compatible gateway with tool/media support
- [CLI Guide](docs/acore-cli.md) — interactive `abstractcore-chat` walkthrough

Project:
- [Changelog](CHANGELOG.md) — version history and upgrade notes
- [Contributing](CONTRIBUTING.md) — dev setup and contribution guidelines
- [Security](SECURITY.md) — responsible vulnerability reporting
- [Acknowledgements](ACKNOWLEDGEMENTS.md) — upstream projects and communities

## License

MIT
