Metadata-Version: 2.1
Name: gllm-inference-binary
Version: 0.4.12
Summary: A library containing components related to model inferences in Gen AI applications.
Author: Henry Wicaksono
Author-email: henry.wicaksono@gdplabs.id
Requires-Python: >=3.11,<3.14
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Provides-Extra: anthropic
Provides-Extra: google
Provides-Extra: google-genai
Provides-Extra: google-vertexai
Provides-Extra: huggingface
Provides-Extra: litellm
Provides-Extra: openai
Provides-Extra: twelvelabs
Provides-Extra: voyage
Requires-Dist: anthropic (>=0.49.0,<0.50.0) ; extra == "anthropic"
Requires-Dist: gllm-core-binary (>=0.3.0,<0.4.0)
Requires-Dist: google-genai (==1.20.0) ; extra == "google"
Requires-Dist: huggingface-hub (>=0.30.0,<0.31.0) ; extra == "huggingface"
Requires-Dist: jinja2 (>=3.1.4,<4.0.0)
Requires-Dist: jsonschema (>=4.24.0,<5.0.0)
Requires-Dist: langchain (>=0.3.0,<0.4.0)
Requires-Dist: langchain-google-genai (==2.0.8) ; extra == "google-genai"
Requires-Dist: langchain-google-vertexai (==2.0.21) ; extra == "google-vertexai"
Requires-Dist: langchain-openai (>=0.3.12,<0.4.0) ; extra == "openai"
Requires-Dist: langchain-voyageai (>=0.1.6,<0.2.0) ; (python_version < "3.13") and (extra == "voyage")
Requires-Dist: libmagic (>=1.0,<2.0) ; sys_platform == "win32"
Requires-Dist: litellm (>=1.69.2,<2.0.0) ; extra == "litellm"
Requires-Dist: openai (>=1.74.0,<2.0.0) ; extra == "openai"
Requires-Dist: pandas (>=2.2.3,<3.0.0)
Requires-Dist: poetry (>=2.1.3,<3.0.0)
Requires-Dist: protobuf (>=5.28.2,<6.0.0)
Requires-Dist: python-magic (>=0.4.27,<0.5.0)
Requires-Dist: python-magic-bin (>=0.4.14,<0.5.0) ; sys_platform == "win32"
Requires-Dist: sentencepiece (>=0.2.0,<0.3.0)
Requires-Dist: transformers (==4.52.4) ; extra == "huggingface"
Requires-Dist: twelvelabs (>=0.4.4,<0.5.0) ; extra == "twelvelabs"
Requires-Dist: voyageai (>=0.3.0,<0.4.0) ; (python_version < "3.13") and (extra == "voyage")
Description-Content-Type: text/markdown

# GLLM Inference

## Description

A library containing components related to model inferences in Gen AI applications.

## Installation

### Prerequisites
- Python 3.11+ - [Install here](https://www.python.org/downloads/)
- pip (if installing with pip) - [Install here](https://pip.pypa.io/en/stable/installation/)
- Poetry 1.8.1+ (if installing with Poetry) - [Install here](https://python-poetry.org/docs/#installation)
- Git (if installing from the repository) - [Install here](https://git-scm.com/downloads)
- For installation from Git: access to the [GDP Labs SDK GitHub repository](https://github.com/GDP-ADMIN/gen-ai-internal)

### 1. Installation from Artifact Registry
Choose one of the following methods to install the package:

#### Using pip
```bash
pip install gllm-inference-binary
```

#### Using Poetry
```bash
poetry add gllm-inference-binary
```
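After installing with either tool, a quick sanity check that the package is importable can help catch environment mix-ups. The import name `gllm_inference` below is an assumption inferred from the package name and may differ from the actual module name:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# "gllm_inference" is the assumed import name for gllm-inference-binary;
# adjust if the actual top-level module name differs.
print(has_module("gllm_inference"))
```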

### 2. Development Installation (Git)
For development purposes, you can install directly from the Git repository:
```bash
poetry add "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git#subdirectory=libs/gllm-inference"
```
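If you prefer pip for development installs, the same Git source can typically be installed with pip's VCS URL syntax, which also supports the `#subdirectory=` fragment:

```bash
# pip equivalent of the Poetry command above; requires the same SSH access
pip install "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git#subdirectory=libs/gllm-inference"
```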

Available extras:
- `anthropic`: Install Anthropic models dependencies
- `google`: Install Google Gen AI SDK dependencies
- `google-genai`: Install Google Generative AI models dependencies
- `google-vertexai`: Install Google Vertex AI models dependencies
- `huggingface`: Install HuggingFace models dependencies
- `litellm`: Install LiteLLM models dependencies
- `openai`: Install OpenAI models dependencies
- `twelvelabs`: Install TwelveLabs models dependencies
- `voyage`: Install Voyage AI models dependencies (Python < 3.13)
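Extras are selected with pip's standard bracket syntax, or with Poetry's `--extras` option. For example, to pull in the OpenAI dependencies:

```bash
# pip: quote the requirement so the shell does not expand the brackets
pip install "gllm-inference-binary[openai]"

# Poetry equivalent
poetry add gllm-inference-binary --extras openai
```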

## Managing Dependencies
1. Go to the root folder of the `gllm-inference` module, e.g. `cd libs/gllm-inference`.
2. Run `poetry shell` to create a virtual environment.
3. Run `poetry lock` to create a lock file if one does not exist yet.
4. Run `poetry install` to install the `gllm-inference` requirements for the first time.
5. Run `poetry update` whenever you update a dependency version in `pyproject.toml`.
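The steps above can be run as a single sequence from the repository root:

```bash
cd libs/gllm-inference   # 1. go to the module root
poetry shell             # 2. enter a virtual environment
poetry lock              # 3. create the lock file if it does not exist yet
poetry install           # 4. install the requirements
poetry update            # 5. only needed after changing versions in pyproject.toml
```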

## Contributing
Please refer to this [Python Style Guide](https://docs.google.com/document/d/1uRggCrHnVfDPBnG641FyQBwUwLoFw0kTzNqRm92vUwM/edit?usp=sharing)
for information about the code style, documentation standards, and static code analysis (SCA) tools to use when contributing to this project.

1. Activate the `pre-commit` hooks by running `pre-commit install`.
2. Run `poetry shell` to create a virtual environment.
3. Run `poetry lock` to create a lock file if one does not exist yet.
4. Run `poetry install` to install the `gllm-inference` requirements for the first time.
5. Run `which python` to get the interpreter path to set in Visual Studio Code (`Ctrl`+`Shift`+`P` or `Cmd`+`Shift`+`P`, then "Python: Select Interpreter").
6. Run the unit tests to confirm the setup works:
```bash
poetry run pytest -s tests/unit_tests/
```