Metadata-Version: 2.1
Name: chatdocs
Version: 0.2.2
Summary: Chat with your documents offline using AI.
Home-page: https://github.com/marella/chatdocs
Author: Ravindra Marella
Author-email: mv.ravindra007@gmail.com
License: MIT
Keywords: chatdocs ctransformers transformers langchain chroma ai llm
Platform: UNKNOWN
Classifier: Development Status :: 1 - Planning
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/markdown
Requires-Dist: InstructorEmbedding (<2.0.0,>=1.0.1)
Requires-Dist: argilla (==1.8.0)
Requires-Dist: chromadb (<0.4.0,>=0.3.0)
Requires-Dist: ctransformers (<0.3.0,>=0.2.5)
Requires-Dist: deepmerge (<2.0.0,>=1.1.0)
Requires-Dist: extract-msg (<0.42.0,>=0.41.0)
Requires-Dist: langchain (>=0.0.181)
Requires-Dist: pandoc (<3.0.0,>=2.3)
Requires-Dist: pdfminer.six (==20221105)
Requires-Dist: pypandoc (<2.0.0,>=1.11)
Requires-Dist: pyyaml (>=6.0)
Requires-Dist: quart (<0.19.0,>=0.18.3)
Requires-Dist: sentence-transformers (<3.0.0,>=2.2.2)
Requires-Dist: tqdm (<5.0.0,>=4.64.1)
Requires-Dist: typer (>=0.9.0)
Requires-Dist: typing-extensions (<5.0.0,>=4.4.0)
Requires-Dist: unstructured (<0.7.0,>=0.6.0)
Provides-Extra: gptq
Requires-Dist: auto-gptq (<0.3.0,>=0.2.1) ; extra == 'gptq'
Provides-Extra: tests
Requires-Dist: pytest ; extra == 'tests'

# [ChatDocs](https://github.com/marella/chatdocs) [![PyPI](https://img.shields.io/pypi/v/chatdocs)](https://pypi.org/project/chatdocs/) [![tests](https://github.com/marella/chatdocs/actions/workflows/tests.yml/badge.svg)](https://github.com/marella/chatdocs/actions/workflows/tests.yml)

Chat with your documents offline using AI. No data leaves your system. An internet connection is needed only to install the tool and download the AI models. It is based on [PrivateGPT](https://github.com/imartinez/privateGPT) but has more features.

![Web UI](https://github.com/marella/chatdocs/raw/main/docs/demo.png)

- [Features](#features)
- [Installation](#installation)
- [Usage](#usage)
- [Configuration](#configuration)
- [GPU](#gpu)

## Features

- Supports GGML models via [C Transformers](https://github.com/marella/ctransformers)
- Supports 🤗 Transformers models
- Supports GPTQ models
- Web UI
- GPU support
- Highly configurable via `chatdocs.yml`

<details>
<summary><strong>Show supported document types</strong></summary><br>

| Extension       | Format                         |
| :-------------- | :----------------------------- |
| `.csv`          | CSV                            |
| `.docx`, `.doc` | Word Document                  |
| `.enex`         | Evernote Export                |
| `.eml`          | Email                          |
| `.epub`         | EPUB                           |
| `.html`         | HTML                           |
| `.md`           | Markdown                       |
| `.msg`          | Outlook Message                |
| `.odt`          | OpenDocument Text              |
| `.pdf`          | Portable Document Format (PDF) |
| `.pptx`, `.ppt` | PowerPoint Document            |
| `.txt`          | Text file (UTF-8)              |

</details>

## Installation

Install the tool using:

```sh
pip install chatdocs
```

Download the AI models using:

```sh
chatdocs download
```

The tool can now run offline, without an internet connection.

## Usage

Add a directory of documents to chat with using:

```sh
chatdocs add /path/to/documents
```

> The processed documents will be stored in the `db` directory by default.

Chat with your documents using:

```sh
chatdocs ui
```

Open http://localhost:5000 in your browser to access the web UI.

It also has a nice command-line interface:

```sh
chatdocs chat
```

<details>
<summary><strong>Show preview</strong></summary><br>

![Demo](https://github.com/marella/chatdocs/raw/main/docs/cli.png)

</details>

## Configuration

All the configuration options can be changed using the `chatdocs.yml` config file. Create a `chatdocs.yml` file in some directory and run all commands from that directory. For reference, see the default [`chatdocs.yml`](https://github.com/marella/chatdocs/blob/main/chatdocs/data/chatdocs.yml) file.

You don't have to copy the entire file; just add the config options you want to change, as they will be merged with the default config. For example, see [`tests/fixtures/chatdocs.yml`](https://github.com/marella/chatdocs/blob/main/tests/fixtures/chatdocs.yml), which changes only some of the config options.
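For instance, a minimal `chatdocs.yml` that overrides only the embeddings model and the GGML model (using the example model names from the sections below) could look like this; every option not listed keeps its default value:

```yml
embeddings:
  model: hkunlp/instructor-large

ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_type: llama
```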

### Embeddings

To change the embeddings model, add and change the following in your `chatdocs.yml`:

```yml
embeddings:
  model: hkunlp/instructor-large
```

> **Note:** When you change the embeddings model, delete the `db` directory and add the documents again.
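For example, assuming the default `db` directory and the documents path used earlier, rebuilding the index looks like:

```sh
rm -rf db                        # remove the vector store built with the old embeddings model
chatdocs add /path/to/documents  # re-index the documents with the new embeddings model
```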

### C Transformers

To change the C Transformers GGML model, add and change the following in your `chatdocs.yml`:

```yml
ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_type: llama
```

> **Note:** When you add a new model for the first time, run `chatdocs download` to download the model before using it.

You can also use an existing local model file:

```yml
ctransformers:
  model: /path/to/ggml-model.bin
  model_type: llama
```

### 🤗 Transformers

To use 🤗 Transformers models, add the following to your `chatdocs.yml`:

```yml
llm: huggingface
```

To change the 🤗 Transformers model, add and change the following in your `chatdocs.yml`:

```yml
huggingface:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-HF
```

> **Note:** When you add a new model for the first time, run `chatdocs download` to download the model before using it.

### GPTQ

To use GPTQ models, install the `auto-gptq` package using:

```sh
pip install git+https://github.com/PanQiWei/AutoGPTQ@v0.2.1
```

and add the following to your `chatdocs.yml`:

```yml
llm: gptq
```

To change the GPTQ model, add and change the following in your `chatdocs.yml`:

```yml
gptq:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ
  model_file: Wizard-Vicuna-7B-Uncensored-GPTQ-4bit-128g.no-act-order.safetensors
```

> **Note:** When you add a new model for the first time, run `chatdocs download` to download the model before using it.

## GPU

### Embeddings

To enable GPU (CUDA) support for the embeddings model, add the following to your `chatdocs.yml`:

```yml
embeddings:
  model_kwargs:
    device: cuda
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions [here](https://pytorch.org/get-started/locally/).

### C Transformers

> **Note:** Currently only LLaMA GGML models have GPU support.

To enable GPU (CUDA) support for the C Transformers GGML model, add the following to your `chatdocs.yml`:

```yml
ctransformers:
  config:
    gpu_layers: 50
```

You should also reinstall the `ctransformers` package with CUDA enabled:

```sh
pip uninstall ctransformers --yes
CT_CUBLAS=1 pip install ctransformers --no-binary ctransformers
```

<details>
<summary><strong>Show commands for Windows</strong></summary><br>

On Windows PowerShell run:

```sh
$env:CT_CUBLAS=1
pip uninstall ctransformers --yes
pip install ctransformers --no-binary ctransformers
```

On Windows Command Prompt run:

```sh
set CT_CUBLAS=1
pip uninstall ctransformers --yes
pip install ctransformers --no-binary ctransformers
```

</details>

### 🤗 Transformers

To enable GPU (CUDA) support for the 🤗 Transformers model, add the following to your `chatdocs.yml`:

```yml
huggingface:
  device: 0
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions [here](https://pytorch.org/get-started/locally/).

### GPTQ

To enable GPU (CUDA) support for the GPTQ model, add the following to your `chatdocs.yml`:

```yml
gptq:
  device: 0
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions [here](https://pytorch.org/get-started/locally/).

After installing PyTorch with CUDA enabled, you should also reinstall the `auto-gptq` package:

```sh
pip uninstall auto-gptq --yes
pip install git+https://github.com/PanQiWei/AutoGPTQ@v0.2.1
```

## License

[MIT](https://github.com/marella/chatdocs/blob/main/LICENSE)


