Metadata-Version: 2.1
Name: aihandler
Version: 1.11.2
Summary: AI Handler: An engine which wraps certain huggingface models
Home-page: https://github.com/Capsize-Games/aihandler
Author: Capsize LLC
Author-email: contact@capsize.gg
License: AGPL-3.0
Keywords: ai,chatbot,chat,ai
Requires-Python: >=3.10.0
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: einops (==0.6.0)
Requires-Dist: ninja (==1.11.1)
Requires-Dist: JIT (==0.2.7)
Requires-Dist: tqdm (==4.65.0)
Requires-Dist: xformers (==0.0.19)
Requires-Dist: omegaconf (==2.3.0)
Requires-Dist: accelerate (==0.19.0)
Requires-Dist: controlnet-aux (==0.0.3)
Requires-Dist: huggingface-hub (==0.14.1)
Requires-Dist: numpy (==1.23.5)
Requires-Dist: Pillow (==9.5.0)
Requires-Dist: pip (==23.1.2)
Requires-Dist: PyQt6 (==6.4.2)
Requires-Dist: PyQt6-Qt6 (==6.4.3)
Requires-Dist: PyQt6-sip (==13.4.1)
Requires-Dist: pyqtdarktheme (==2.1.0)
Requires-Dist: pyre-extensions (==0.0.29)
Requires-Dist: lightning (==2.0.2)
Requires-Dist: requests (==2.30.0)
Requires-Dist: requests-oauthlib (==1.3.1)
Requires-Dist: safetensors (==0.3.1)
Requires-Dist: scipy (==1.10.1)
Requires-Dist: tensorflow (==2.12.0)
Requires-Dist: tokenizers (==0.13.3)
Requires-Dist: charset-normalizer (==3.1.0)
Requires-Dist: opencv-python (==4.7.0.72)
Requires-Dist: setuptools (==67.7.2)
Requires-Dist: sympy (==1.12.0)
Requires-Dist: typing-extensions (==4.5.0)
Requires-Dist: urllib3 (==1.26.15)
Requires-Dist: diffusers (==0.16.1)
Requires-Dist: transformers (==4.29.1)
Requires-Dist: compel (==1.1.5)
Requires-Dist: regex

# AI Handler
[![Upload Python Package](https://github.com/Capsize-Games/aihandler/actions/workflows/python-publish.yml/badge.svg)](https://github.com/Capsize-Games/aihandler/actions/workflows/python-publish.yml)
[![Discord](https://img.shields.io/discord/839511291466219541?color=5865F2&logo=discord&logoColor=white)](https://discord.gg/PUVDDCJ7gz)
![GitHub](https://img.shields.io/github/license/Capsize-Games/aihandler)
![GitHub last commit](https://img.shields.io/github/last-commit/Capsize-Games/aihandler)
![GitHub issues](https://img.shields.io/github/issues/Capsize-Games/aihandler)
![GitHub closed issues](https://img.shields.io/github/issues-closed/Capsize-Games/aihandler)
![GitHub pull requests](https://img.shields.io/github/issues-pr/Capsize-Games/aihandler)
![GitHub closed pull requests](https://img.shields.io/github/issues-pr-closed/Capsize-Games/aihandler)

This is a simple framework for running AI models. It wraps the huggingface APIs
and gives you a queue, threading, a simple API, and the ability to run Stable Diffusion and LLMs seamlessly
on your local hardware.

This is not intended to be used as a standalone application.

It can easily be extended to power graphical interfaces, or it can be run from the command line.

AI Handler is a work in progress. It powers two projects at the moment, but may not be ready for general use.

## Installation

This is a work in progress.

## Pre-requisites

System requirements:

- Windows 10+
- Python 3.10.8
- pip 23.0.1
- CUDA Toolkit 11.7
- cuDNN 8.6.0.163
- CUDA-capable GPU
- 16GB+ RAM

[For Windows, follow windows branch instructions](https://github.com/Capsize-Games/aihandler/tree/develop-windows)

Install
```
pip install https://github.com/w4ffl35/diffusers/archive/refs/tags/v0.15.0.ckpt_fix_0.0.1.tar.gz
pip install aihandler
```

#### Optional

These are optional instructions for installing TensorRT and Deepspeed for Windows

##### Install TensorRT:

1. Download TensorRT-8.4.3.1.Windows10.x86_64.cuda-11.6.cudnn8.4
2. Git clone TensorRT 8.4.3.1
3. Follow their instructions to build TensorRT-8.4.3.1 python wheel
4. Install TensorRT `pip install tensorrt-*.whl`
 
##### Install Deepspeed:

1. Git clone Deepspeed 0.8.1
2. Follow their instructions to build Deepspeed python wheel
3. Install Deepspeed `pip install deepspeed-*.whl`

---

## Environment variables

- `AIRUNNER_ENVIRONMENT` - `dev` or `prod`. Defaults to `dev`. This controls the default `LOG_LEVEL`
- `LOG_LEVEL` - `FATAL` for production, `DEBUG` for development. Set this explicitly to force a log level
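The interaction between these two variables can be sketched as follows. This is an illustration of the behavior described above, not the actual aihandler implementation; the function name `resolve_log_level` is hypothetical.

```python
import os

def resolve_log_level() -> str:
    """Derive the log level from AIRUNNER_ENVIRONMENT, letting an
    explicit LOG_LEVEL override it (a sketch of the described behavior)."""
    environment = os.environ.get("AIRUNNER_ENVIRONMENT", "dev")
    default_level = "DEBUG" if environment == "dev" else "FATAL"
    return os.environ.get("LOG_LEVEL", default_level)
```

With neither variable set, this yields `DEBUG`; setting `AIRUNNER_ENVIRONMENT=prod` switches the default to `FATAL`, and `LOG_LEVEL` wins if both are set.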

### Huggingface variables

#### Offline mode

These environment variables keep you offline until you need to download a model. This prevents unwanted online access and speeds up usage of the huggingface libraries.

- `DISABLE_TELEMETRY` Keep this set to `1` at all times. Huggingface collects minimal telemetry when downloading a model from their repository, but this keeps it disabled. [See more info in this github thread](https://github.com/huggingface/diffusers/pull/1833#issuecomment-1368484414)
- `HF_HUB_OFFLINE` When loading a diffusers model, huggingface libraries will attempt to download an updated cache before running the model. This prevents that check from happening (along with a boolean passed to `load_pretrained`; see the runner.py file for examples)
- `TRANSFORMERS_OFFLINE` Similar to `HF_HUB_OFFLINE`, but for transformers models
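Putting these together, one way to apply the offline-mode variables is to set them before importing any huggingface libraries, since the libraries read them at import time or on first use. This is a minimal sketch; the `go_online` helper is a hypothetical convenience, not part of aihandler.

```python
import os

# Set offline flags before importing transformers/diffusers.
os.environ["DISABLE_TELEMETRY"] = "1"     # keep telemetry disabled
os.environ["HF_HUB_OFFLINE"] = "1"        # skip the hub cache-update check
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # same, for transformers models

def go_online() -> None:
    """Flip the offline flags off when a model actually needs downloading
    (hypothetical helper for illustration)."""
    os.environ["HF_HUB_OFFLINE"] = "0"
    os.environ["TRANSFORMERS_OFFLINE"] = "0"
```

Note that staying fully offline only works for models that are already present in the local cache.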
