Metadata-Version: 2.4
Name: argo-proxy
Version: 2.5.1a0
Summary: Proxy server to Argo API, OpenAI format compatible
Author-email: Peng Ding <oaklight@gmx.com>
License-Expression: MIT
Project-URL: Documentation, https://github.com/Oaklight/argo-openai-proxy#readme
Project-URL: Repository, https://github.com/Oaklight/argo-openai-proxy
Project-URL: Issues, https://github.com/Oaklight/argo-openai-proxy/issues
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Software Development
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiohttp>=3.12.2
Requires-Dist: loguru>=0.7.3
Requires-Dist: PyYAML>=6.0.2
Requires-Dist: sanic>=25.3.0
Requires-Dist: tiktoken>=0.9.0
Requires-Dist: setuptools<75
Provides-Extra: dev
Requires-Dist: dotenv>=0.9.9; extra == "dev"
Requires-Dist: openai>=1.79.0; extra == "dev"
Requires-Dist: mypy>=1.14.1; extra == "dev"
Requires-Dist: build>=1.2.2.post1; extra == "dev"
Requires-Dist: twine>=6.1.0; extra == "dev"
Requires-Dist: httpx>=0.28.1; extra == "dev"
Requires-Dist: requests>=2.25.1; extra == "dev"
Dynamic: license-file

# argo-openai-proxy

This project is a proxy application that forwards requests to an ARGO API and optionally converts the responses to be compatible with OpenAI's API format. It can be used in conjunction with [autossh-tunnel-dockerized](https://github.com/Oaklight/autossh-tunnel-dockerized) or other secure connection tools.

## TL;DR

```bash
pip install argo-proxy # install the package
argo-proxy # run the proxy
```

## NOTICE OF USAGE

The machine or server making API calls to Argo must be connected to the Argonne internal network, or through a VPN on an Argonne-managed computer if you are working off-site. Your instance of the Argo proxy should always run on-premises on an Argonne machine. The software is provided "as is," without any warranties. By using this software, you accept that the authors, contributors, and affiliated organizations will not be liable for any damages or issues arising from its use. You are solely responsible for ensuring the software meets your requirements.

- [Notice of Usage](#notice-of-usage)
- [Deployment](#deployment)
  - [Prerequisites](#prerequisites)
  - [Configuration File](#configuration-file)
  - [Running the Application](#running-the-application)
  - [First-Time Setup](#first-time-setup)
  - [Configuration Options Reference](#configuration-options-reference)
- [Usage](#usage)
  - [Endpoints](#endpoints)
    - [OpenAI Compatible](#openai-compatible)
    - [Not OpenAI Compatible](#not-openai-compatible)
    - [Timeout Override](#timeout-override)
  - [Models](#models)
    - [Chat Models](#chat-models)
    - [Embedding Models](#embedding-models)
  - [Examples](#examples)
    - [Chat Completion Example](#chat-completion-example)
    - [Embedding Example](#embedding-example)
    - [o1 Chat Example](#o1-chat-example)
    - [OpenAI Client Example](#openai-client-example)
- [Folder Structure](#folder-structure)
- [Bug Reports and Contributions](#bug-reports-and-contributions)

## Deployment

### Prerequisites

- **Python 3.10+** is required \
  We recommend using conda/mamba or pipx to manage a dedicated environment. \
  **Conda/Mamba**: download and install from <https://conda-forge.org/download/>

- Install dependencies:

  ```bash
  pip install argo-proxy
  ```

  or, if you prefer the development version (run from the root of the cloned repository):

  ```bash
  pip install .
  ```

### Configuration File

If you prefer not to configure it manually, the [First-Time Setup](#first-time-setup) will create the file for you automatically.

The application uses `config.yaml` for configuration. Here's an example:

```yaml
port: 44497
host: 0.0.0.0
argo_url: "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/chat/"
argo_stream_url: "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/streamchat/"
argo_embedding_url: "https://apps.inside.anl.gov/argoapi/api/v1/resource/embed/"
user: "your_username" # set during first-time setup
verbose: true # can be changed during setup
num_workers: 5
timeout: 600 # in seconds
```

### Running the Application

To start the application:

```bash
argo-proxy [config_path]
```

- Without arguments: searches for `config.yaml` under `~/.config/argoproxy/`, `~/.argoproxy/`, or the current directory
- With a path: uses the specified config file

  ```bash
  argo-proxy /path/to/config.yaml
  ```

### First-Time Setup

When running without an existing config file:

1. The script offers to create `config.yaml` from `config.sample.yaml`
2. Automatically selects a random available port (can be overridden)
3. Prompts for:
   - Your username (sets `user` field)
   - Verbose mode preference (sets `verbose` field)
4. Validates connectivity to configured URLs
5. Shows the generated config in a formatted display for review before proceeding

Example session:

```bash
$ argo-proxy 
No valid configuration found.
Would you like to create it from config.sample.yaml? [Y/n]: 
Creating new configuration...
Use port [52226]? [Y/n/<port>]: 
Enter your username: your_username
Enable verbose mode? [Y/n] 
Set timeout to [600] seconds? [Y/n/<timeout>] 
Created new configuration at: /home/your_username/.config/argoproxy/config.yaml
Using port 52226...
Validating URL connectivity...
Current configuration:
--------------------------------------
{
    "host": "0.0.0.0",
    "port": 52226,
    "user": "your_username",
    "argo_url": "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/chat/",
    "argo_stream_url": "https://apps-dev.inside.anl.gov/argoapi/api/v1/resource/streamchat/",
    "argo_embedding_url": "https://apps.inside.anl.gov/argoapi/api/v1/resource/embed/",
    "verbose": true,
    "num_workers": 5,
    "timeout": 600
}
--------------------------------------
# ... proxy server starting info display ...
```

### Configuration Options Reference

| Option               | Description                                                  | Default            |
| -------------------- | ------------------------------------------------------------ | ------------------ |
| `host`               | Host address to bind the server to                           | `0.0.0.0`          |
| `port`               | Application port (random available port selected by default) | randomly assigned  |
| `argo_url`           | ARGO chat API URL                                            | Dev URL (for now)  |
| `argo_stream_url`    | ARGO stream API URL                                          | Dev URL (for now)  |
| `argo_embedding_url` | ARGO embedding API URL                                       | Prod URL           |
| `user`               | Your username                                                | (Set during setup) |
| `verbose`            | Debug logging                                                | `true`             |
| `num_workers`        | Worker processes                                             | `5`                |
| `timeout`            | Request timeout (seconds)                                    | `600`              |

### `argo-proxy` CLI Options

```bash
$ argo-proxy -h
usage: argo-proxy [-h] [--show] [--host HOST] [--port PORT] [--num-worker NUM_WORKER]
                  [--verbose | --quiet] [--version]
                  [config]

Argo Proxy CLI

positional arguments:
  config                Path to the configuration file

options:
  -h, --help            show this help message and exit
  --show, -s            Show the current configuration during launch
  --host HOST, -H HOST  Host address to bind the server to
  --port PORT, -p PORT  Port number to bind the server to
  --num-worker NUM_WORKER, -n NUM_WORKER
                        Number of worker processes to run
  --verbose, -v         Enable verbose logging, override if `verbose` set False in config
  --quiet, -q           Disable verbose logging, override if `verbose` set True in config
  --version, -V         Show the version and exit.
```

## Usage

### Endpoints

#### OpenAI Compatible

These endpoints convert responses from the ARGO API to be compatible with OpenAI's format:

- **`/v1/chat/completions`**: Converts ARGO chat/completions responses to OpenAI-compatible format.
- **`/v1/completions`**: Legacy completions endpoint, with responses converted to OpenAI format.
- **`/v1/embeddings`**: Accesses ARGO Embedding API with response conversion.
- **`/v1/models`**: Lists available models in OpenAI-compatible format.
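
As a minimal sketch (assuming the proxy runs locally on the sample config's port 44497; adjust to your own `config.yaml`), a request to `/v1/chat/completions` uses the standard OpenAI chat payload shape:

```python
import json

# Hypothetical base URL; match the host/port in your config.yaml.
BASE_URL = "http://localhost:44497/v1"

# OpenAI-style chat payload; the proxy translates it for the ARGO API.
payload = {
    "model": "argo:gpt-4o",  # Argo Proxy model name (see the Models section)
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Send with any HTTP client, for example:
#   requests.post(f"{BASE_URL}/chat/completions", json=payload)
print(json.dumps(payload, indent=2))
```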

#### Not OpenAI Compatible

These endpoints interact directly with the ARGO API and do not convert responses to OpenAI's format:

- **`/v1/chat`**: Proxies requests to the ARGO API without conversion.
- **`/v1/status`**: Returns a simple "hello" from GPT-4o, confirming the proxy is alive.
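
For example, a quick liveness check against `/v1/status` might look like this (a sketch using only the standard library, assuming the proxy listens on localhost:44497):

```python
import urllib.request

# Hypothetical address; adjust host/port to match your config.yaml.
STATUS_URL = "http://localhost:44497/v1/status"

def proxy_alive(url: str = STATUS_URL, timeout: float = 5.0) -> bool:
    """Return True if the proxy answers its status endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, etc.
        return False
```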

#### Timeout Override

You can override the default timeout by including a `timeout` parameter in your request.

For details on making this override in different query flavors, see [Timeout Override Examples](timeout_examples.md).
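
A sketch of a chat payload carrying a per-request override (the 120-second value here is illustrative):

```python
# OpenAI-style chat payload with a per-request timeout override (in seconds).
payload = {
    "model": "argo:gpt-4o",
    "messages": [{"role": "user", "content": "Summarize this document."}],
    "timeout": 120,  # overrides the proxy-wide default (600 s in the sample config)
}
```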

### Models

#### Chat Models

| Original ARGO Model Name | Argo Proxy Name                          |
| ------------------------ | ---------------------------------------- |
| `gpt35`                  | `argo:gpt-3.5-turbo`                     |
| `gpt35large`             | `argo:gpt-3.5-turbo-16k`                 |
| `gpt4`                   | `argo:gpt-4`                             |
| `gpt4large`              | `argo:gpt-4-32k`                         |
| `gpt4turbo`              | `argo:gpt-4-turbo-preview`               |
| `gpt4o`                  | `argo:gpt-4o`                            |
| `gpt4olatest`            | `argo:gpt-4o-latest`                     |
| `gpto1preview`           | `argo:gpt-o1-preview`, `argo:o1-preview` |
| `gpto1mini`              | `argo:gpt-o1-mini`, `argo:o1-mini`       |
| `gpto3mini`              | `argo:gpt-o3-mini`, `argo:o3-mini`       |
| `gpto1`                  | `argo:gpt-o1`, `argo:o1`                 |

#### Embedding Models

| Original ARGO Model Name | Argo Proxy Name               |
| ------------------------ | ----------------------------- |
| `ada002`                 | `argo:text-embedding-ada-002` |
| `v3small`                | `argo:text-embedding-3-small` |
| `v3large`                | `argo:text-embedding-3-large` |
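
As a sketch, a request to `/v1/embeddings` uses the OpenAI embeddings payload shape with one of the proxy model names above:

```python
# OpenAI-style embeddings payload; the proxy forwards it to the ARGO embed API.
payload = {
    "model": "argo:text-embedding-3-small",
    "input": ["first document", "second document"],  # one or more strings
}
```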

### Examples

#### Chat Completion Example

For examples of how to use the `/v1/chat/completions`, `/v1/completions`, and `/v1/chat` endpoints, see the following:

- [chat_completions_example.py](examples/chat_completions_example.py)
- [chat_completions_example_stream.py](examples/chat_completions_example_stream.py)
- [completions_example.py](examples/completions_example.py)
- [completions_example_stream.py](examples/completions_example_stream.py)
- [chat_example.py](examples/chat_example.py)
- [chat_example_stream.py](examples/chat_example_stream.py)

#### Embedding Example

- [embedding_example.py](examples/embedding_example.py)

#### o1 Chat Example

- [o1_chat_example.py](examples/o1_chat_example.py)

#### OpenAI Client Example

- [openai_o3_chat_example.py](examples/o3_chat_example_pyclient.py)

## Folder Structure

The following is an overview of the project's directory structure:

```
$ tree -I "__pycache__|*.egg-info|dist|dev_scripts|config.yaml"
.
├── config.sample.yaml
├── examples
│   ├── chat_completions_example.py
│   ├── chat_completions_example_stream.py
│   ├── chat_example.py
│   ├── chat_example_stream.py
│   ├── completions_example.py
│   ├── completions_example_stream.py
│   ├── embedding_example.py
│   ├── o1_chat_example.py
│   └── o3_chat_example_pyclient.py
├── LICENSE
├── Makefile
├── pyproject.toml
├── README.md
├── run_app.sh
├── src
│   └── argoproxy
│       ├── app.py
│       ├── chat.py
│       ├── cli.py
│       ├── completions.py
│       ├── config.py
│       ├── constants.py
│       ├── embed.py
│       ├── extras.py
│       ├── __init__.py
│       ├── py.typed
│       └── utils.py
└── timeout_examples.md

4 directories, 27 files
```

## Bug Reports and Contributions

This project was developed in my spare time. Bugs and issues may exist. If you encounter any or have suggestions for improvements, please [open an issue](https://github.com/Oaklight/argo-proxy/issues/new) or [submit a pull request](https://github.com/Oaklight/argo-proxy/compare). Your contributions are highly appreciated!
