Metadata-Version: 2.3
Name: llmloader
Version: 0.1.9
Summary: Loads a Langchain LLM by model name as a string.
License: Apache-2.0
Author: Robert Turnbull
Author-email: robert.turnbull@unimelb.edu.au
Requires-Python: >=3.10,<4.0
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Provides-Extra: llama
Requires-Dist: accelerate (>=0.31.0) ; extra == "llama"
Requires-Dist: bitsandbytes (>=0.43.1) ; extra == "llama"
Requires-Dist: langchain (>=0.2.1)
Requires-Dist: langchain-anthropic (>=0.2.4)
Requires-Dist: langchain-azure-ai (>=1.0.4,<2.0.0)
Requires-Dist: langchain-community (>=0.3.12)
Requires-Dist: langchain-google-genai (>=4.0.0,<5.0.0)
Requires-Dist: langchain-huggingface (>=0.1.2) ; extra == "llama"
Requires-Dist: langchain-mistralai (>=0.2.9)
Requires-Dist: langchain-openai (>=0.1.8)
Requires-Dist: langchain-xai (>=0.2.1)
Requires-Dist: torch (>=2.2) ; extra == "llama"
Requires-Dist: transformers (>=4.41.2) ; extra == "llama"
Requires-Dist: typer (>=0.12.3)
Project-URL: Documentation, https://github.com/rbturnbull/llmloader
Project-URL: Repository, https://github.com/rbturnbull/llmloader
Description-Content-Type: text/x-rst

=========
llmloader
=========

.. start-badges

|pypi| |testing badge| |black badge|

.. |pypi| image:: https://img.shields.io/pypi/v/llmloader?color=blue
   :target: https://pypi.org/project/llmloader/

.. |testing badge| image:: https://github.com/rbturnbull/llmloader/actions/workflows/testing.yml/badge.svg
   :target: https://github.com/rbturnbull/llmloader/actions

.. |black badge| image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/psf/black

.. end-badges

Loads a LangChain LLM from a model name given as a string.

Installation
============

.. code-block:: bash

    pip install llmloader

Or install from GitHub directly:

.. code-block:: bash

    pip install git+https://github.com/rbturnbull/llmloader.git


Usage
==========

Load an LLM with the ``llmloader.load`` function, e.g.

.. code-block:: python

    import llmloader

    llm = llmloader.load("gpt-4o")
    result = llm.invoke("Write me a haiku about love")

    llm = llmloader.load("claude-3-5-sonnet-20240620")
    result = llm.invoke("Write me a haiku about love")

    llm = llmloader.load("grok-4-latest")
    result = llm.invoke("Write me a haiku about love")

    llm = llmloader.load("mistral-small-latest")
    result = llm.invoke("Write me a haiku about love")

    llm = llmloader.load("meta-llama/Llama-3.3-70B-Instruct")
    result = llm.invoke("Write me a haiku about love")

CLI
==========

You can try out prompts and models from the command line. Make sure your API keys are set in your environment, or pass a key with the ``--api-key`` flag.

.. code-block:: bash
    
    llmloader "Write me a haiku about love" --model gpt-5-mini
    llmloader "Write me a haiku about love" --model gpt-5.2
    llmloader "Write me a haiku about love" --model claude-sonnet-4-5-20250929
    llmloader "Write me a haiku about love" --model grok-4-latest
    llmloader "Write me a haiku about love" --model mistral-small-latest
    llmloader "Write me a haiku about love" --model gemini-3-pro-preview
    # Using OpenRouter
    llmloader "Write me a haiku about love" --model openai/gpt-5-mini
    # Local deployment models
    llmloader "Write me a haiku about love" --model meta-llama/Meta-Llama-3-8B-Instruct
    llmloader "Write me a haiku about love" --model meta-llama/Llama-3.3-70B-Instruct
    llmloader --help

Environment Variables
======================

You can pass an API key for the model provider using the command-line flag ``--api-key``, the keyword argument ``api_key=...``, or by setting the appropriate environment variable as described below.

================= =========================
Model Provider    Environment Variable
================= =========================
OpenAI            OPENAI_API_KEY
Anthropic         ANTHROPIC_API_KEY
Mistral           MISTRAL_API_KEY
XAI               XAI_API_KEY
OpenRouter        OPENROUTER_API_KEY
Google            GOOGLE_API_KEY
================= =========================
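
For example, to use the OpenAI and Anthropic models shown above, you could export the corresponding keys before invoking ``llmloader`` (the values below are placeholders; substitute your real keys):

.. code-block:: bash

    # Placeholder values — substitute your real provider keys
    export OPENAI_API_KEY="sk-..."
    export ANTHROPIC_API_KEY="sk-ant-..."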

Azure
------------
To use custom models deployed with Azure OpenAI, you need to set the following environment variables:

- ``CUSTOM_API_KEY``: Your Azure API key.
- ``CUSTOM_ENDPOINT``: The endpoint URL for your Azure AI service.

``--model`` should match the deployment name in your Azure AI resource.

Notes:

- If ``llmloader`` detects the ``OPENAI_API_KEY`` environment variable and ``CUSTOM_ENDPOINT`` is not set, it will use the OpenAI API by default when a valid model name is provided.
- If both ``CUSTOM_API_KEY`` and ``CUSTOM_ENDPOINT`` are set, ``llmloader`` will use the Azure service.
- ``CUSTOM_ENDPOINT`` should be the URL ending with ``/models``, e.g. ``https://your-resource-name.openai.azure.com/models``.
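
Putting this together, a minimal Azure setup might look like the following (the key, resource name, and deployment name are placeholders for your own Azure details):

.. code-block:: bash

    # Placeholder values — substitute your own Azure resource details
    export CUSTOM_API_KEY="your-azure-key"
    export CUSTOM_ENDPOINT="https://your-resource-name.openai.azure.com/models"

    # Then pass your deployment name as the model, e.g.:
    # llmloader "Write me a haiku about love" --model your-deployment-name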

Testing
========================

Endpoint Manual Testing
--------------------------

``test_manual.py`` contains tests for models that require API keys. After setting the appropriate environment variables, you can run these tests manually with:

.. code-block:: bash

    pytest -m manual

To specify a particular test, use:

.. code-block:: bash

    pytest -m manual tests/test_manual.py::test_name

Credit
==========

- `Robert Turnbull <https://robturnbull.com>`_  (Melbourne Data Analytics Platform, University of Melbourne)
- `James Quang <https://www.linkedin.com/in/jamesquang>`_  (Melbourne Data Analytics Platform, University of Melbourne)

