Metadata-Version: 2.4
Name: abstractcore
Version: 2.13.2
Summary: Unified interface to all LLM providers with essential infrastructure for tool calling, streaming, and model management
Author-email: Laurent-Philippe Albou <contact@abstractcore.ai>
Maintainer-email: Laurent-Philippe Albou <contact@abstractcore.ai>
License-Expression: MIT
Project-URL: Homepage, https://lpalbou.github.io/AbstractCore
Project-URL: Documentation, https://github.com/lpalbou/AbstractCore#readme
Project-URL: Repository, https://github.com/lpalbou/AbstractCore
Project-URL: Bug Tracker, https://github.com/lpalbou/AbstractCore/issues
Project-URL: Changelog, https://github.com/lpalbou/AbstractCore/blob/main/CHANGELOG.md
Keywords: llm,openai,anthropic,ollama,lmstudio,huggingface,mlx,ai,machine-learning,natural-language-processing,tool-calling,streaming
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers
Classifier: Typing :: Typed
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic<3.0.0,>=2.0.0
Requires-Dist: httpx<1.0.0,>=0.24.0
Provides-Extra: openai
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "anthropic"
Provides-Extra: remote
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "remote"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "remote"
Provides-Extra: openrouter
Provides-Extra: portkey
Provides-Extra: openai-compatible
Provides-Extra: ollama
Provides-Extra: lmstudio
Provides-Extra: huggingface
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "huggingface"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "huggingface"
Requires-Dist: torchvision>=0.17.0; extra == "huggingface"
Requires-Dist: torchaudio>=2.1.0; extra == "huggingface"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "huggingface"
Requires-Dist: outlines>=0.1.0; extra == "huggingface"
Provides-Extra: mlx
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "mlx"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "mlx"
Requires-Dist: outlines>=0.1.0; extra == "mlx"
Provides-Extra: mlx-bench
Requires-Dist: matplotlib<4.0.0,>=3.8.0; extra == "mlx-bench"
Provides-Extra: vllm
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "vllm"
Provides-Extra: embeddings
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "embeddings"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "embeddings"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "embeddings"
Provides-Extra: tokens
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "tokens"
Provides-Extra: tools
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "tools"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "tools"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "tools"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "tools"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "tools"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "tools"
Provides-Extra: tool
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "tool"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "tool"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "tool"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "tool"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "tool"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "tool"
Provides-Extra: media
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "media"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "media"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "media"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "media"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "media"
Provides-Extra: compression
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "compression"
Provides-Extra: all
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all"
Requires-Dist: torchvision>=0.17.0; extra == "all"
Requires-Dist: torchaudio>=2.1.0; extra == "all"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all"
Requires-Dist: outlines>=0.1.0; extra == "all"
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "all"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "all"
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "all"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "all"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "all"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all"
Requires-Dist: abstractvision>=0.2.0; extra == "all"
Provides-Extra: all-apple
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all-apple"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all-apple"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all-apple"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all-apple"
Requires-Dist: torchvision>=0.17.0; extra == "all-apple"
Requires-Dist: torchaudio>=2.1.0; extra == "all-apple"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all-apple"
Requires-Dist: outlines>=0.1.0; extra == "all-apple"
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "all-apple"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "all-apple"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all-apple"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "all-apple"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "all-apple"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all-apple"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all-apple"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all-apple"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all-apple"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all-apple"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all-apple"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all-apple"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all-apple"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all-apple"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all-apple"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all-apple"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all-apple"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all-apple"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all-apple"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all-apple"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all-apple"
Requires-Dist: abstractvision>=0.2.0; extra == "all-apple"
Provides-Extra: all-gpu
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all-gpu"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all-gpu"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all-gpu"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all-gpu"
Requires-Dist: torchvision>=0.17.0; extra == "all-gpu"
Requires-Dist: torchaudio>=2.1.0; extra == "all-gpu"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all-gpu"
Requires-Dist: outlines>=0.1.0; extra == "all-gpu"
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "all-gpu"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all-gpu"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "all-gpu"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "all-gpu"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all-gpu"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all-gpu"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all-gpu"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all-gpu"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all-gpu"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all-gpu"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all-gpu"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all-gpu"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all-gpu"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all-gpu"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all-gpu"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all-gpu"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all-gpu"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all-gpu"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all-gpu"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all-gpu"
Requires-Dist: abstractvision>=0.2.0; extra == "all-gpu"
Provides-Extra: all-non-mlx
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "all-non-mlx"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "all-non-mlx"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "all-non-mlx"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "all-non-mlx"
Requires-Dist: torchvision>=0.17.0; extra == "all-non-mlx"
Requires-Dist: torchaudio>=2.1.0; extra == "all-non-mlx"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "all-non-mlx"
Requires-Dist: outlines>=0.1.0; extra == "all-non-mlx"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "all-non-mlx"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "all-non-mlx"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "all-non-mlx"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "all-non-mlx"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "all-non-mlx"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "all-non-mlx"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "all-non-mlx"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "all-non-mlx"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "all-non-mlx"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "all-non-mlx"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "all-non-mlx"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "all-non-mlx"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "all-non-mlx"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "all-non-mlx"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "all-non-mlx"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "all-non-mlx"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "all-non-mlx"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "all-non-mlx"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "all-non-mlx"
Requires-Dist: abstractvision>=0.2.0; extra == "all-non-mlx"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Provides-Extra: server
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "server"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "server"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "server"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "server"
Requires-Dist: abstractvision>=0.2.0; extra == "server"
Provides-Extra: vision
Requires-Dist: abstractvision>=0.2.0; extra == "vision"
Provides-Extra: vision-diffusers
Requires-Dist: abstractvision[huggingface]>=0.2.0; extra == "vision-diffusers"
Provides-Extra: vision-sdcpp
Requires-Dist: abstractvision[sdcpp]>=0.2.0; extra == "vision-sdcpp"
Provides-Extra: vision-local
Requires-Dist: abstractvision[local]>=0.2.0; extra == "vision-local"
Provides-Extra: test
Requires-Dist: pytest>=7.0.0; extra == "test"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "test"
Requires-Dist: pytest-mock>=3.10.0; extra == "test"
Requires-Dist: pytest-cov>=4.0.0; extra == "test"
Requires-Dist: responses>=0.23.0; extra == "test"
Requires-Dist: httpx>=0.24.0; extra == "test"
Requires-Dist: tomli>=2.0.0; python_version < "3.11" and extra == "test"
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "test"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "test"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "test"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "test"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "test"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "test"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "test"
Requires-Dist: uvicorn<1.0.0,>=0.23.0; extra == "test"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "test"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "test"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "test"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.5.0; extra == "docs"
Requires-Dist: mkdocs-material>=9.0.0; extra == "docs"
Requires-Dist: mkdocstrings[python]>=0.22.0; extra == "docs"
Requires-Dist: mkdocs-autorefs>=0.4.0; extra == "docs"
Provides-Extra: full-dev
Requires-Dist: openai<2.0.0,>=1.0.0; extra == "full-dev"
Requires-Dist: anthropic<1.0.0,>=0.25.0; extra == "full-dev"
Requires-Dist: transformers<6.0.0,>=4.57.1; extra == "full-dev"
Requires-Dist: torch<3.0.0,>=2.6.0; extra == "full-dev"
Requires-Dist: torchvision>=0.17.0; extra == "full-dev"
Requires-Dist: torchaudio>=2.1.0; extra == "full-dev"
Requires-Dist: llama-cpp-python<1.0.0,>=0.2.0; extra == "full-dev"
Requires-Dist: outlines>=0.1.0; extra == "full-dev"
Requires-Dist: mlx<1.0.0,>=0.30.0; extra == "full-dev"
Requires-Dist: mlx-lm<1.0.0,>=0.30.0; extra == "full-dev"
Requires-Dist: vllm<1.0.0,>=0.6.0; extra == "full-dev"
Requires-Dist: sentence-transformers<6.0.0,>=5.1.0; extra == "full-dev"
Requires-Dist: numpy<2.0.0,>=1.20.0; python_version < "3.13" and extra == "full-dev"
Requires-Dist: numpy<3.0.0,>=2.1.0; python_version >= "3.13" and extra == "full-dev"
Requires-Dist: tiktoken<1.0.0,>=0.5.0; extra == "full-dev"
Requires-Dist: requests<3.0.0,>=2.25.0; extra == "full-dev"
Requires-Dist: beautifulsoup4<5.0.0,>=4.12.0; extra == "full-dev"
Requires-Dist: lxml<6.0.0,>=4.9.0; extra == "full-dev"
Requires-Dist: ddgs<10.0.0,>=9.10.0; python_version >= "3.10" and extra == "full-dev"
Requires-Dist: duckduckgo-search<4.0.0,>=3.8.0; python_version < "3.10" and extra == "full-dev"
Requires-Dist: psutil<6.0.0,>=5.9.0; extra == "full-dev"
Requires-Dist: Pillow<12.0.0,>=10.0.0; extra == "full-dev"
Requires-Dist: pymupdf4llm<1.0.0,>=0.0.20; extra == "full-dev"
Requires-Dist: pymupdf-layout<2.0.0,>=1.26.6; extra == "full-dev"
Requires-Dist: unstructured[docx,odt,pptx,rtf,xlsx]<1.0.0,>=0.10.0; extra == "full-dev"
Requires-Dist: pandas<3.0.0,>=1.0.0; extra == "full-dev"
Requires-Dist: fastapi<1.0.0,>=0.100.0; extra == "full-dev"
Requires-Dist: uvicorn[standard]<1.0.0,>=0.23.0; extra == "full-dev"
Requires-Dist: python-multipart<1.0.0,>=0.0.6; extra == "full-dev"
Requires-Dist: sse-starlette<2.0.0,>=1.6.0; extra == "full-dev"
Requires-Dist: abstractvision>=0.2.0; extra == "full-dev"
Requires-Dist: pytest>=7.0.0; extra == "full-dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "full-dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "full-dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "full-dev"
Requires-Dist: responses>=0.23.0; extra == "full-dev"
Requires-Dist: black>=23.0.0; extra == "full-dev"
Requires-Dist: isort>=5.12.0; extra == "full-dev"
Requires-Dist: mypy>=1.5.0; extra == "full-dev"
Requires-Dist: ruff>=0.1.0; extra == "full-dev"
Requires-Dist: pre-commit>=3.0.0; extra == "full-dev"
Requires-Dist: mkdocs>=1.5.0; extra == "full-dev"
Requires-Dist: mkdocs-material>=9.0.0; extra == "full-dev"
Requires-Dist: mkdocstrings[python]>=0.22.0; extra == "full-dev"
Requires-Dist: mkdocs-autorefs>=0.4.0; extra == "full-dev"
Dynamic: license-file

# AbstractCore

[![PyPI version](https://img.shields.io/pypi/v/abstractcore.svg)](https://pypi.org/project/abstractcore/)
[![CI](https://github.com/lpalbou/AbstractCore/actions/workflows/ci.yml/badge.svg)](https://github.com/lpalbou/AbstractCore/actions/workflows/ci.yml)
[![Tested Python](https://img.shields.io/badge/dynamic/yaml?url=https%3A%2F%2Fraw.githubusercontent.com%2Flpalbou%2FAbstractCore%2Fmain%2F.github%2Fworkflows%2Fci.yml&query=%24.jobs.test.strategy.matrix%5B%22python-version%22%5D&label=tested%20python&color=blue)](https://github.com/lpalbou/AbstractCore/actions/workflows/ci.yml)
[![license](https://img.shields.io/github/license/lpalbou/AbstractCore)](https://github.com/lpalbou/AbstractCore/blob/main/LICENSE)
[![GitHub stars](https://img.shields.io/github/stars/lpalbou/AbstractCore?style=social)](https://github.com/lpalbou/AbstractCore/stargazers)

Unified LLM Interface
> Write once, run everywhere

AbstractCore is an offline-capable, open-source-first LLM infrastructure layer
for Python applications. It gives you one `create_llm(...)` API across local
runtimes, self-hosted servers, cloud APIs, and OpenAI-compatible gateways.

Use it in-process from Python, or run it as a universal `/v1` endpoint for apps
that already speak the OpenAI API. The same application can run fully offline
once local model assets are installed, stay private on your own inference
server, or route to hosted providers when you want managed capacity.

The goal is simple: put LLM capability at your fingertips without tying your
product to a vendor, network connection, or model family. AbstractCore keeps
application code portable while the model underneath moves between OpenAI,
Anthropic, Ollama, LM Studio, MLX, HuggingFace/GGUF, vLLM, OpenRouter, Portkey,
or any OpenAI-compatible backend.

The default install is intentionally lightweight; add providers and optional
subsystems via explicit install extras. For local runtimes, AbstractCore is
cache-first and offline-first: it never silently downloads model weights. You
pull or prefetch the models you want, and once your chosen provider and tools
are local, you can run without an internet connection.

First-class support for:
- offline-capable local operation with explicit model setup (no silent downloads)
- local/open-weight model backends (Ollama, LM Studio, MLX, HuggingFace/GGUF, vLLM)
- cloud, hosted gateway, and generic OpenAI-compatible providers
- sync + async
- streaming + non-streaming
- universal tool calling (native + prompted tool syntax)
- structured output (Pydantic)
- unified generation parameters, capability detection, and provider quirks
- session memory, prompt caching, events, tracing, and retry-aware reliability hooks
- media input (images/audio/video + documents) with explicit, policy-driven fallbacks (*)
- optional capability plugins (`core.voice/core.audio/core.vision`) for deterministic TTS/STT and generative vision (via `abstractvoice` / `abstractvision`)
- glyph visual-text compression for long documents (**)
- optional OpenAI-compatible `/v1` gateway server (multi-provider) and single-model endpoint

(*) Media input is policy-driven (no silent semantic changes). If a model doesn’t support images, AbstractCore can use a configured vision model to generate short visual observations and inject them into your text-only request (vision fallback). Audio/video attachments are also policy-driven (`audio_policy`, `video_policy`) and may require capability plugins for fallbacks. See [Media Handling](docs/media-handling-system.md) and [Centralized Config](docs/centralized-config.md).

(**) Optional visual-text compression: render long text/PDFs into images and process them with a vision model to reduce token usage. See [Glyph Visual-Text Compression](docs/glyphs.md) (install `pip install "abstractcore[compression]"`; for PDFs also install `pip install "abstractcore[media]"`).

Docs: [Getting Started](docs/getting-started.md) · [FAQ](docs/faq.md) · [Docs Index](docs/README.md) · https://lpalbou.github.io/AbstractCore

## Why AbstractCore

Many libraries can call an LLM. AbstractCore is for the messy middle of real
applications, where you need the same product code to survive different model
families, local inference servers, API dialects, offline deployments, and
capability gaps.

Open-source and self-hosted models are first-class, not a demo path. AbstractCore
handles the things that often break when you move beyond a single hosted API:
prompted vs native tools, schema-following differences, structured-output retry,
reasoning text, media support, token budget vocabulary, local server discovery,
and prompt/cache behavior.

That makes it a practical foundation for privacy-sensitive assistants, local
developer tools, document workflows, research machines, edge deployments, and
cloud-backed production services. You can build remote-first products, fully
local products, or hybrid products that move between the two as cost, privacy,
latency, and hardware constraints change.

Use AbstractCore when you want a focused provider layer that stays close to your
application code. Use the wider AbstractFramework stack when you also need
durable runtime execution, agents, flows, gateways, agentic CLI surfaces, memory,
or assistant applications such as
[AbstractAssistant](https://github.com/lpalbou/abstractassistant).

## AbstractFramework ecosystem

AbstractCore is part of the **AbstractFramework** ecosystem:

- **AbstractFramework (umbrella)**: https://github.com/lpalbou/AbstractFramework
- **AbstractCore (this package)**: provider-agnostic LLM I/O + reliability primitives
- **AbstractRuntime**: durable tool/effect execution, workflows, and state persistence (recommended host runtime) — https://github.com/lpalbou/abstractruntime
- **Wider stack**: agents, flows, gateway control, agentic CLI integrations, memory, semantics, coding tools, and digital assistant surfaces built on the same foundation

By default, AbstractCore is **pass-through for tools** (`execute_tools=False`): it returns structured tool calls in `response.tool_calls`, and your runtime decides *whether/how* to execute them (policy, sandboxing, retries, persistence). See [Tool Calling](docs/tool-calling.md) and [Architecture](docs/architecture.md).

```mermaid
graph LR
  APP["Your app"] --> AC["AbstractCore"]
  AF["AbstractFramework optional"] --> AC
  AF --> RT["AbstractRuntime / Agent / Flow / Gateway"]
  AC --> P["Provider adapter"]
  P --> LLM["LLM backend"]
  AC -.->|tool calls| RT
  RT -.->|tool results| AC
```

## Install

Choose the smallest install that matches where your models run. Extras compose,
so you can start with `abstractcore[remote]` and add `media`, `tools`, `server`,
or local runtime extras as your app grows.

```bash
# Core: local HTTP servers and gateways that need no SDK
# Includes Ollama, LM Studio, OpenRouter, Portkey, and OpenAI-compatible /v1 endpoints
pip install abstractcore

# Hosted API SDKs (OpenAI + Anthropic). OpenRouter/Portkey still work from core.
pip install "abstractcore[remote]"

# Individual provider SDKs / local runtimes
pip install "abstractcore[openai]"       # OpenAI SDK
pip install "abstractcore[anthropic]"    # Anthropic SDK
pip install "abstractcore[huggingface]"  # Transformers / torch (heavy)
pip install "abstractcore[mlx]"          # Apple Silicon local inference (heavy)
pip install "abstractcore[vllm]"         # NVIDIA CUDA / ROCm (heavy)

# Optional application features
pip install "abstractcore[tools]"       # built-in web tools (web_search, skim_websearch, skim_url, fetch_url)
pip install "abstractcore[media]"       # images, PDFs, Office docs
pip install "abstractcore[compression]" # glyph visual-text compression (Pillow-only)
pip install "abstractcore[embeddings]"  # EmbeddingManager + local embedding models
pip install "abstractcore[tokens]"      # precise token counting (tiktoken)
pip install "abstractcore[server]"      # OpenAI-compatible HTTP gateway

# Combine extras (zsh: keep quotes)
pip install "abstractcore[remote,media,tools]"

# Turnkey local-runtime installs
pip install "abstractcore[all-apple]"    # Apple Silicon: remote SDKs + HF/GGUF + MLX + features + server
pip install "abstractcore[all-gpu]"      # NVIDIA GPU: remote SDKs + HF/GGUF + vLLM + features + server
```

## Quickstart

Local/offline example (requires a running Ollama server with the model already
pulled via `ollama pull qwen3:4b`):

```python
from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b")
response = llm.generate("Draft a privacy-preserving onboarding checklist.")
print(response.content)
```

Remote API example (requires `pip install "abstractcore[openai]"`):

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("What is the capital of France?")
print(response.content)
```
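
Because the call surface is identical across providers, the provider and model
can be plain configuration. A minimal sketch (the `APP_LLM_*` environment
variables are illustrative, not an AbstractCore convention):

```python
import os

from abstractcore import create_llm

# Hypothetical env vars for illustration; any config source works.
provider = os.getenv("APP_LLM_PROVIDER", "ollama")
model = os.getenv("APP_LLM_MODEL", "qwen3:4b")

llm = create_llm(provider, model=model)
print(llm.generate("Same code, any backend.").content)
```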

### Conversation state (`BasicSession`)

```python
from abstractcore import create_llm, BasicSession

session = BasicSession(create_llm("anthropic", model="claude-haiku-4-5"))
print(session.generate("Give me 3 bakery name ideas.").content)
print(session.generate("Pick the best one and explain why.").content)
```

### Streaming

```python
from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b")
for chunk in llm.generate("Write a short poem about distributed systems.", stream=True):
    print(chunk.content or "", end="", flush=True)
```

### Async

```python
import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-4o-mini")
    resp = await llm.agenerate("Give me 5 bullet points about HTTP caching.")
    print(resp.content)

asyncio.run(main())
```
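
Since `agenerate` is a regular coroutine, fanning out over several prompts
works with standard `asyncio` tooling. A sketch (whether one client instance
should be shared across concurrent calls can vary by provider; see the
[Async Guide](docs/async-guide.md)):

```python
import asyncio

from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-4o-mini")
    prompts = [
        "One-line summary of TCP slow start.",
        "One-line summary of HTTP/2 multiplexing.",
        "One-line summary of QUIC connection migration.",
    ]
    # Fire the requests concurrently; gather preserves prompt order.
    responses = await asyncio.gather(*(llm.agenerate(p) for p in prompts))
    for prompt, resp in zip(prompts, responses):
        print(f"- {prompt}\n  {resp.content}")

asyncio.run(main())
```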

## Token budgets (unified)

```python
from abstractcore import create_llm

llm = create_llm(
    "openai",
    model="gpt-4o-mini",
    max_tokens=8000,        # total budget (input + output)
    max_output_tokens=1200, # output cap
)
```

## Providers (common)

Open-source-first: local providers (Ollama, LM Studio, vLLM, `openai-compatible`, HuggingFace, MLX) are first-class. Cloud and gateway providers are optional.

- `openai`: `OPENAI_API_KEY`, optional `OPENAI_BASE_URL`
- `anthropic`: `ANTHROPIC_API_KEY`, optional `ANTHROPIC_BASE_URL`
- `openrouter`: `OPENROUTER_API_KEY`, optional `OPENROUTER_BASE_URL` (default: `https://openrouter.ai/api/v1`)
- `portkey`: `PORTKEY_API_KEY`, `PORTKEY_CONFIG` (config id), optional `PORTKEY_BASE_URL` (default: `https://api.portkey.ai/v1`)
- `ollama`: local server at `OLLAMA_BASE_URL` (or legacy `OLLAMA_HOST`)
- `lmstudio`: OpenAI-compatible local server at `LMSTUDIO_BASE_URL` (default: `http://localhost:1234/v1`)
- `vllm`: OpenAI-compatible server at `VLLM_BASE_URL` (default: `http://localhost:8000/v1`)
- `openai-compatible`: generic OpenAI-compatible endpoints via `OPENAI_COMPATIBLE_BASE_URL` (default: `http://localhost:1234/v1`)
- `huggingface`: local models via Transformers (optional `HUGGINGFACE_TOKEN` for gated downloads)
- `mlx`: Apple Silicon local models (optional `HUGGINGFACE_TOKEN` for gated downloads)
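
For a generic OpenAI-compatible endpoint, the base URL can come from
`OPENAI_COMPATIBLE_BASE_URL` or be passed explicitly. A sketch, assuming
`create_llm` accepts a `base_url` keyword (mirroring the chat CLI's
`--base-url` flag):

```python
from abstractcore import create_llm

# Point at any OpenAI-compatible /v1 server (llama.cpp server, proxies, ...).
# base_url as a keyword is an assumption mirroring the CLI's --base-url flag.
llm = create_llm(
    "openai-compatible",
    model="my-local-model",
    base_url="http://localhost:1234/v1",
)
print(llm.generate("Ping?").content)
```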

You can also persist settings (including API keys) via the config CLI:
- `abstractcore --status`
- `abstractcore --configure` (alias: `--config`)
- `abstractcore --set-api-key openai sk-...`

## What’s inside (quick tour)

- Tools: universal tool calling across providers → [Tool Calling](docs/tool-calling.md)
- Built-in tools (optional): web + filesystem helpers (`skim_websearch`, `skim_url`, `fetch_url`, `read_file`, …) → [Tool Calling](docs/tool-calling.md)
- Tool syntax rewriting: `tool_call_tags` (Python) and `agent_format` (server) → [Tool Syntax Rewriting](docs/tool-syntax-rewriting.md)
- Structured output: Pydantic-first with provider-aware strategies → [Structured Output](docs/structured-output.md)
- Media input: images/audio/video + documents (policies + fallbacks) → [Media Handling](docs/media-handling-system.md) and [Vision Capabilities](docs/vision-capabilities.md)
- Capability plugins (optional): deterministic `llm.voice/llm.audio/llm.vision` surfaces → [Capabilities](docs/capabilities.md)
- Glyph visual-text compression: scale long-context document analysis via VLMs → [Glyph Visual-Text Compression](docs/glyphs.md)
- Embeddings and semantic search → [Embeddings](docs/embeddings.md)
- Observability: global event bus + interaction traces → [Architecture](docs/architecture.md), [API Reference (Events)](docs/api-reference.md#eventtype), [Interaction Tracing](docs/interaction-tracing.md)
- MCP (Model Context Protocol): discover tools from MCP servers (HTTP/stdio) → [MCP](docs/mcp.md)
- OpenAI-compatible server: one `/v1` gateway for chat + optional `/v1/images/*` and `/v1/audio/*` endpoints → [Server](docs/server.md)

## Tool calling (passthrough by default)

By default (`execute_tools=False`), AbstractCore:
- returns clean assistant text in `response.content`
- returns structured tool calls in `response.tool_calls` (host/runtime executes them)

```python
from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])

print(resp.content)
print(resp.tool_calls)
```
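
In passthrough mode your host code owns execution. A minimal execution-loop
sketch; the attributes assumed on each entry in `response.tool_calls` (`name`,
`arguments`) and the plain-text way results are fed back here are illustrative
assumptions — see [Tool Calling](docs/tool-calling.md) for the supported shapes:

```python
# Minimal host-side execution loop (sketch; attribute names are assumptions).
registry = {"get_weather": get_weather}

for call in resp.tool_calls or []:
    fn = registry[call.name]          # assumed: each call exposes .name
    result = fn(**call.arguments)     # assumed: ...and parsed .arguments
    # Feed the result back as plain text; provider-agnostic, if unsophisticated.
    followup = llm.generate(
        f"Tool {call.name} returned: {result!r}. Answer the original question."
    )
    print(followup.content)
```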

If you need tool-call markup preserved/re-written in `content` for downstream parsers, pass
`tool_call_tags=...` (e.g. `"qwen3"`, `"llama3"`, `"xml"`). See [Tool Syntax Rewriting](docs/tool-syntax-rewriting.md).

## Structured output

```python
from pydantic import BaseModel
from abstractcore import create_llm

class Answer(BaseModel):
    title: str
    bullets: list[str]

llm = create_llm("openai", model="gpt-4o-mini")
answer = llm.generate("Summarize HTTP/3 in 3 bullets.", response_model=Answer)
print(answer.bullets)
```

## Media input (images/audio/video)

Requires `pip install "abstractcore[media]"`.

```python
from abstractcore import create_llm

llm = create_llm("anthropic", model="claude-haiku-4-5")
resp = llm.generate("Describe the image.", media=["./image.png"])
print(resp.content)
```

Notes:
- **Images**: use a vision-capable model, or configure **vision fallback** for text-only models (`abstractcore --config`; `abstractcore --set-vision-provider PROVIDER MODEL`).
- **Video**: `video_policy="auto"` (default) uses native video when supported, otherwise samples frames (requires `ffmpeg`/`ffprobe`) and routes them through image/vision handling (so you still need a vision-capable model or vision fallback configured).
- **Audio**: use an audio-capable model, or set `audio_policy="auto"`/`"speech_to_text"` and install `abstractvoice` for speech-to-text.
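
Both policies can also be supplied per call. A sketch, assuming `generate`
accepts `video_policy`/`audio_policy` keyword overrides alongside `media` (the
file path is a placeholder):

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

# Assumed per-call overrides mirroring the documented policy names.
resp = llm.generate(
    "Summarize what happens in this clip.",
    media=["./clip.mp4"],
    video_policy="auto",  # native video if supported, else frame sampling
)
print(resp.content)
```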

Configure defaults (optional):

```bash
abstractcore --status
abstractcore --set-vision-provider lmstudio qwen/qwen3-vl-4b
abstractcore --set-audio-strategy auto
abstractcore --set-video-strategy auto
```

See [Media Handling](docs/media-handling-system.md) and [Vision Capabilities](docs/vision-capabilities.md).

## HTTP server (OpenAI-compatible gateway)

```bash
pip install "abstractcore[server]"
python -m abstractcore.server.app
```

Use any OpenAI-compatible client, and route to any provider/model via `model="provider/model"`:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ollama/qwen3:4b",
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(resp.choices[0].message.content)
```
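
Streaming through the gateway uses standard OpenAI SDK mechanics. A sketch (the
gateway ships with SSE support, so streaming is expected to work end to end):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
stream = client.chat.completions.create(
    model="ollama/qwen3:4b",
    messages=[{"role": "user", "content": "Stream a haiku about gateways."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```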

See [Server](docs/server.md).

Single-model `/v1` endpoint (one provider/model per worker): see [Endpoint](docs/endpoint.md) (`abstractcore-endpoint`).

## CLI (optional)

Interactive chat:

```bash
abstractcore-chat --provider openai --model gpt-4o-mini
abstractcore-chat --provider lmstudio --model qwen/qwen3-4b-2507 --base-url http://localhost:1234/v1
abstractcore-chat --provider openrouter --model openai/gpt-4o-mini
```

Token limits:
- startup: `abstractcore-chat --max-tokens 8192 --max-output-tokens 1024 ...`
- in-REPL: `/max-tokens 8192` and `/max-output-tokens 1024`

## Built-in CLI apps

AbstractCore also ships with ready-to-use CLI apps:
- `summarizer`, `extractor`, `judge`, `intent`, `deepsearch` (see [docs/apps/](docs/apps/))

## Documentation map

Start here:
- [Docs Index](docs/README.md) — navigation for all docs
- [Prerequisites](docs/prerequisites.md) — provider setup (keys, local servers, hardware notes)
- [Getting Started](docs/getting-started.md) — first call + core concepts
- [FAQ](docs/faq.md) — common questions and setup gotchas
- [Examples](docs/examples.md) — end-to-end patterns and recipes
- [Framework Comparison](docs/comparison.md) — where AbstractCore and AbstractFramework fit next to LiteLLM, LangChain, LangGraph, and LlamaIndex
- [Troubleshooting](docs/troubleshooting.md) — common failures and fixes

Core features:
- [Tool Calling](docs/tool-calling.md) — universal tools across providers (native + prompted)
- [Tool Syntax Rewriting](docs/tool-syntax-rewriting.md) — rewrite tool-call syntax for different runtimes/clients
- [Structured Output](docs/structured-output.md) — schema enforcement + retry strategies
- [Media Handling](docs/media-handling-system.md) — images/audio/video + documents (policies + fallbacks)
- [Vision Capabilities](docs/vision-capabilities.md) — image/video input, vision fallback, and how this differs from generative vision
- [Glyph Visual-Text Compression](docs/glyphs.md) — compress long documents into images for VLMs
- [Generation Parameters](docs/generation-parameters.md) — unified parameter vocabulary and provider quirks
- [Session Management](docs/session.md) — conversation history, persistence, and compaction
- [Embeddings](docs/embeddings.md) — embeddings API and RAG building blocks
- [Async Guide](docs/async-guide.md) — async patterns, concurrency, best practices
- [Centralized Config](docs/centralized-config.md) — `~/.abstractcore/config/abstractcore.json` + CLI config commands
- [Capabilities](docs/capabilities.md) — supported features and current limitations
- [Interaction Tracing](docs/interaction-tracing.md) — inspect prompts/responses/usage for observability
- [MCP](docs/mcp.md) — consume MCP tool servers (HTTP/stdio) as tool sources

Reference and internals:
- [Architecture](docs/architecture.md) — system overview + event system
- [API (Python)](docs/api.md) — how to use the public API
- [API Reference](docs/api-reference.md) — Python API (including events)
- [Server](docs/server.md) — OpenAI-compatible gateway with tool/media support
- [CLI Guide](docs/acore-cli.md) — interactive `abstractcore-chat` walkthrough

Project:
- [Changelog](CHANGELOG.md) — version history and upgrade notes
- [Contributing](CONTRIBUTING.md) — dev setup and contribution guidelines
- [Security](SECURITY.md) — responsible vulnerability reporting
- [Acknowledgements](ACKNOWLEDGEMENTS.md) — upstream projects and communities

## License

MIT
