Metadata-Version: 2.1
Name: pyllmsearch
Version: 0.7.0
Summary: LLM Powered Advanced RAG Application
Project-URL: Homepage, https://github.com/snexus/llm-search
Project-URL: Documentation, https://llm-search.readthedocs.io/en/latest/
Keywords: llm,rag,retrieval-augmented-generation,large-language-models,local,splade,hyde,reranking,chroma,openai
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: llama-cpp-python
Requires-Dist: chromadb~=0.4
Requires-Dist: langchain~=0.1
Requires-Dist: pydantic~=2.5
Requires-Dist: transformers~=4.36
Requires-Dist: sentence-transformers==2.2.2
Requires-Dist: pypdf2~=3.0.1
Requires-Dist: ebooklib==0.18
Requires-Dist: sentencepiece==0.1.99
Requires-Dist: setuptools==67.7.2
Requires-Dist: loguru
Requires-Dist: python-dotenv
Requires-Dist: accelerate~=0.22.0
Requires-Dist: protobuf==3.20.2
Requires-Dist: termcolor
Requires-Dist: openai~=1.8
Requires-Dist: einops
Requires-Dist: click
Requires-Dist: bitsandbytes==0.42.0
Requires-Dist: InstructorEmbedding==1.0.1
Requires-Dist: unstructured~=0.12.4
Requires-Dist: pymupdf==1.22.5
Requires-Dist: streamlit~=1.28
Requires-Dist: python-docx~=1.1
Requires-Dist: six==1.16.0; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: sniffio==1.3.0; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: sqlalchemy==1.4.48; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: starlette==0.27.0; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: sympy==1.11.1; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: tenacity==8.2.3; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: threadpoolctl==3.1.0; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: tiktoken==0.3.3; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: tokenizers==0.15.0; python_version >= "3.10" and python_version < "4.0"
Requires-Dist: tqdm==4.65.0; python_version >= "3.10" and python_version < "4.0"
Provides-Extra: dev
Requires-Dist: black; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: autodoc_pydantic; extra == "dev"
Requires-Dist: sphinx; extra == "dev"
Requires-Dist: sphinx-markdown-builder; extra == "dev"
Requires-Dist: sphinx_rtd_theme; extra == "dev"

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/snexus/llm-search/blob/main/notebooks/llmsearch_google_colab_demo.ipynb)

# pyLLMSearch - Advanced RAG, ready to use

The purpose of this package is to offer a convenient question-answering (RAG) system with a simple YAML-based configuration that enables interaction with multiple collections of local documents. Special attention is given to improvements in various components of the system **beyond basic LLM-based RAG** - better document parsing, hybrid search, HyDE-enabled search, chat history, deep linking, re-ranking, customizable embeddings, and more. The package is designed to work with custom Large Language Models (LLMs), whether from OpenAI or installed locally.
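
To give a feel for the YAML-based configuration, here is a hypothetical sketch; the field names below are illustrative only, not the real schema - see `sample_templates/` in the repository for actual, working configurations.

```yaml
# Hypothetical configuration sketch (field names are illustrative only;
# see sample_templates/ in the repository for the real schema).
cache_folder: /path/to/cache

embeddings:
  doc_path: /path/to/documents
  embedding_model:
    type: sentence-transformers        # or huggingface / instructor
    model_name: intfloat/multilingual-e5-base

semantic_search:
  search_type: hybrid                  # dense + SPLADE sparse
  hyde:
    enabled: false                     # read the HyDE paper before enabling
  reranker:
    model: bge-reranker
```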

## Features

* Supported formats
    * Built-in parsers:
        * `.md` - Divides files based on logical components such as headings, subheadings, and code blocks. Supports additional features like cleaning image links, adding custom metadata, and more.
        * `.pdf` - MuPDF-based parser.
        * `.docx` - custom parser, supports nested tables.
    * Other common formats are supported by `Unstructured` pre-processor:
        * List of supported formats: https://unstructured-io.github.io/unstructured/core/partition.html

* Supports multiple collections of documents, and filtering the results by collection.

* Ability to update the embeddings incrementally, without re-indexing the entire document base.

* Generates dense embeddings from a folder of documents and stores them in a vector database (ChromaDB).
  * The following embedding models are supported:
    * Huggingface embeddings.
    * Sentence-transformers-based models, e.g., `multilingual-e5-base`.
    * Instructor-based models, e.g., `instructor-large`.

* Generates sparse embeddings using SPLADE (https://github.com/naver/splade) to enable hybrid search (sparse + dense).
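
The idea behind hybrid search is to merge the score lists coming from the dense and sparse retrievers. The sketch below shows one generic way to do this (min-max normalization followed by a weighted sum); it is illustrative and not necessarily the exact fusion formula used by this package.

```python
def _normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max normalize scores to [0, 1] so the two score scales are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}


def hybrid_scores(dense: dict[str, float], sparse: dict[str, float],
                  alpha: float = 0.5) -> dict[str, float]:
    """Blend dense and sparse retrieval scores; alpha weighs the dense side.
    Generic weighted-sum fusion sketch, not llm-search's exact formula."""
    d, s = _normalize(dense), _normalize(sparse)
    return {doc: alpha * d.get(doc, 0.0) + (1 - alpha) * s.get(doc, 0.0)
            for doc in set(d) | set(s)}
```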

* Supports the "Retrieve and Re-rank" strategy for semantic search, see - https://www.sbert.net/examples/applications/retrieve_rerank/README.html.
    * Besides the original `ms-marco-MiniLM` cross-encoder, the more modern `bge-reranker` is also supported.
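
The "Retrieve and Re-rank" pattern boils down to two stages: a cheap retriever narrows the corpus to a candidate set, then a slower but more accurate cross-encoder reorders those candidates. The sketch below captures the shape of the pipeline; the two scoring callables stand in for real retriever and re-ranker models.

```python
from typing import Callable


def retrieve_and_rerank(query: str, docs: list[str],
                        retriever_score: Callable[[str, str], float],
                        reranker_score: Callable[[str, str], float],
                        top_k: int = 50, top_n: int = 5) -> list[str]:
    """Two-stage search: a cheap retriever keeps the top_k candidates,
    then a cross-encoder-style scorer reorders them and returns top_n.
    The scoring callables are placeholders for real models."""
    candidates = sorted(docs, key=lambda d: retriever_score(query, d),
                        reverse=True)[:top_k]
    return sorted(candidates, key=lambda d: reranker_score(query, d),
                  reverse=True)[:top_n]
```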

* Supports HyDE (Hypothetical Document Embeddings) - https://arxiv.org/pdf/2212.10496.pdf
    * WARNING: Enabling HyDE (via config or webapp) can significantly alter the quality of the results. Please make sure to read the paper before enabling.
    * From my own experiments, enabling HyDE significantly boosts the quality of the output on topics where the user can't formulate the question using the domain-specific language of the topic - e.g., when learning new topics.
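
The core HyDE trick is small: instead of embedding the question, ask an LLM for a hypothetical answer and embed that, since the answer is likely to be phrased in the domain's own vocabulary. A minimal sketch, where `generate` and `embed` stand in for a real LLM and embedding model:

```python
from typing import Callable, Sequence


def hyde_query_embedding(question: str,
                         generate: Callable[[str], str],
                         embed: Callable[[str], Sequence[float]]) -> Sequence[float]:
    """HyDE: obtain a hypothetical answer from the LLM, then search with the
    embedding of that answer rather than of the question itself."""
    prompt = f"Write a short passage that answers the question:\n{question}"
    hypothetical_doc = generate(prompt)
    return embed(hypothetical_doc)
```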

* Support for multi-querying, inspired by `RAG Fusion` - https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1
    * When multi-querying is turned on (either in the config or the webapp), the original query is replaced by 3 variants of the same query, helping to bridge gaps in terminology and, as the article puts it, "offer different angles or perspectives".
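
Once several query variants return their own result lists, those lists need to be merged. A common choice in RAG-Fusion-style pipelines is reciprocal rank fusion (RRF), sketched below for illustration; this is the standard RRF formula, not a claim about this package's exact merging logic.

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge result lists from several query variants: each document earns
    1 / (k + rank) from every list it appears in, so documents retrieved by
    multiple variants rise to the top (k=60 is the conventional constant)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```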

* Supports optional chat history with question contextualization.

* Allows interaction with embedded documents, internally supporting the following models and methods (including locally hosted):
    * OpenAI models (ChatGPT 3.5/4 and Azure OpenAI).
    * HuggingFace models.
    * Models supported by llama.cpp - for the full list, see https://github.com/ggerganov/llama.cpp#description
    * AutoGPTQ models (temporarily disabled due to broken dependencies).

* Interoperability with LiteLLM + Ollama via OpenAI API, supporting hundreds of different models (see [Model configuration for LiteLLM](sample_templates/llm/litellm.yaml))

* Other features
    * Simple CLI and web interfaces.
    * Deep linking into document sections - jump to an individual PDF page or a header in a markdown file.
    * Ability to save responses to an offline database for future analysis.
    * Experimental API


## Demo

![Demo](media/llmsearch-demo-v2.gif)


## Documentation

[Browse Documentation](https://llm-search.readthedocs.io/en/latest/)


