Metadata-Version: 2.4
Name: aibridgecore
Version: 1.5.4
Summary: Bridge for LLMs
Home-page: https://github.com/23ventures/aibridge-core
Author: Ashish Tilekar
Author-email: developer.tools@23v.co
License: MIT
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=3.9.0
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai<=1.82.1
Requires-Dist: SQLAlchemy>=2.0.19
Requires-Dist: redis>=4.6.0
Requires-Dist: PyYAML>=6.0.1
Requires-Dist: Jinja2>=3.1.2
Requires-Dist: pymongo>=4.4.1
Requires-Dist: sqlparse>=0.4.4
Requires-Dist: jsonschema>=4.18.4
Requires-Dist: Pillow>=10.0.0
Requires-Dist: google-genai>=1.2.0
Requires-Dist: cohere>=5.13.11
Requires-Dist: ai21>=2.13.0
Requires-Dist: xmltodict>=0.13.0
Requires-Dist: anthropic>=0.45.2
Requires-Dist: ollama<=1.2.2
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary


# aibridgecore

`aibridgecore` is a Python SDK for working with multiple AI providers through a consistent set of text-generation utilities, prompt management, reusable variables, structured outputs, queue-backed execution, and provider-specific image and video modules.

## Overview

- Multi-provider text generation with a shared high-level pattern
- Structured outputs for JSON, CSV, and XML workflows
- Stored prompts and reusable variables backed by SQL or MongoDB
- Redis-based queue support for asynchronous processing
- Provider-specific image generation APIs
- Provider-specific video generation APIs

## Installation

```bash
pip install aibridgecore
```

Python 3.9 or later is required.

## Configuration

Set the `AIBRIDGE_CONFIG` environment variable to the path of your configuration file.

```bash
export AIBRIDGE_CONFIG=/absolute/path/to/aibridge_config.yaml
```

Minimal configuration example:

```yaml
open_ai:
  equal:
    - YOUR_OPENAI_API_KEY

database: sql
message_queue: redis
redis_host: localhost
redis_port: 6379
group_name: my_consumer_group
stream_name: my_stream
no_of_threads: 1
```

You can also configure the SDK programmatically:

```python
from aibridgecore import SetConfig

SetConfig.set_api_key(
    ai_service="open_ai",
    key="YOUR_OPENAI_API_KEY",
    priority="equal",
)

SetConfig.set_db_confonfig(
    database="sql",
    database_name=None,
    database_uri=None,
)

SetConfig.redis_config(
    redis_host="localhost",
    redis_port=6379,
    group_name="my_consumer_group",
    stream_name="my_stream",
    no_of_threads=1,
)
```

Supported `ai_service` keys:

- `open_ai`
- `stable_diffusion`
- `cohere_api`
- `ai21_api`
- `gemini_ai`
- `anthropic`
- `grok`
- `deepseek`
- `mistral`
- `alibaba`
- `kimi`
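
Each `ai_service` key maps to a priority bucket of API keys, mirroring the YAML layout shown earlier. As a plain-Python illustration (not an SDK call), the same mapping could be assembled before writing it out to the config file:

```python
# Provider keys to register; the names follow the ai_service keys above.
providers = {
    "open_ai": "YOUR_OPENAI_API_KEY",
    "anthropic": "YOUR_ANTHROPIC_API_KEY",
    "gemini_ai": "YOUR_GEMINI_API_KEY",
}

# Each service maps to a priority bucket ("equal" here) holding a list of
# keys, matching the YAML structure in the configuration example above.
config = {service: {"equal": [key]} for service, key in providers.items()}

print(config["open_ai"])
```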

## Text Generation

Primary text providers exported from `aibridgecore`:

- `OpenAIService`
- `GeminiAIService`
- `AnthropicService`
- `CohereApi`
- `AI21labsText`
- `OllamaService`
- `GrokService`
- `DeepseekService`
- `MistralService`
- `AlibabaService`
- `KimiService`

Basic generation example:

```python
import json

from aibridgecore import OpenAIService

schema = json.dumps(
    {
        "summary": ["short summary bullet"],
        "keywords": ["keyword"],
    }
)

response = OpenAIService.generate(
    prompts=["Summarize {{topic}} for an engineering update."],
    prompt_data=[{"topic": "queue-backed AI processing"}],
    output_format=["json"],
    format_strcture=[schema],
    model="gpt-3.5-turbo",
    max_tokens=800,
    temperature=0.3,
    context=[{"role": "system", "context": "Be concise and factual."}],
)

print(response["items"]["response"][0]["data"][0])
```
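
The nested indexing above follows the response shape returned by `generate`. A minimal sketch of unpacking it, using a stand-in dictionary with the same shape (the sample payload is made up for illustration, and the assumption that JSON output arrives as a string is worth confirming against your SDK version):

```python
import json

# Stand-in for the structure returned by generate(); not real model output.
response = {
    "items": {
        "response": [
            {
                "data": [
                    json.dumps(
                        {
                            "summary": ["Queue-backed processing decouples request and result."],
                            "keywords": ["redis", "queue"],
                        }
                    )
                ]
            }
        ]
    }
}

# Each prompt's output sits under items -> response -> [prompt index] -> data.
raw = response["items"]["response"][0]["data"][0]
parsed = json.loads(raw)
print(parsed["keywords"])
```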

Streaming is also available on the text provider classes that expose `generate_stream(...)`:

```python
from aibridgecore import OpenAIService

stream = OpenAIService.generate_stream(
    prompts=["Write a short release note for this SDK update."],
    model="gpt-3.5-turbo",
    context=[{"role": "system", "context": "Keep the tone professional."}],
)

for chunk in stream:
    print(chunk)
```

Structured output notes:

- `output_format` accepts `json`, `csv`, or `xml`
- `format_strcture` (the parameter name is spelled this way in the SDK) should match the expected output structure for each prompt
- `context` entries use `role` and `context`
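
As a sketch of what structure strings might look like per format, and how a CSV payload could be parsed once returned (the exact shapes `format_strcture` accepts are assumptions to verify against your SDK version; the payload below is sample data, not real model output):

```python
import csv
import io
import json

# Illustrative structure strings for the json and csv output formats.
json_structure = json.dumps({"summary": ["bullet"], "keywords": ["keyword"]})
csv_structure = "summary,keywords"

# Parsing a CSV payload returned as text.
csv_payload = "summary,keywords\nshort summary,redis\n"
rows = list(csv.DictReader(io.StringIO(csv_payload)))
print(rows[0]["keywords"])
```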

## Prompt and Variable Management

Use prompt templates when the shape of a prompt is reusable, and use variables when part of that prompt should come from a named stored dataset.

- `prompt_data` injects request-specific values directly into a prompt template
- `variables` maps template placeholders to previously saved variable keys

Example:

```python
from aibridgecore import PromptInsertion, VariableInsertion

saved_variable = VariableInsertion.save_variables(
    var_key="release_tones",
    var_value=["clear", "professional", "direct"],
)

saved_prompt = PromptInsertion.save_prompt(
    name="release_summary",
    prompt="Write a {{tone}} summary about {{topic}}.",
    prompt_data={"topic": "this release"},
    variables={"tone": "release_tones"},
)

prompt_record = PromptInsertion.get_prompt(id=saved_prompt["id"])
all_prompts = PromptInsertion.get_all_prompt(page=1)

variable_record = VariableInsertion.get_variable(id=saved_variable["id"])
all_variables = VariableInsertion.get_all_variable(page=1)
```
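
Conceptually, `prompt_data` and `variables` both fill `{{...}}` placeholders; the difference is only where the value comes from. A minimal sketch of that resolution (the SDK's own rendering is Jinja2-based, so this is an illustration of the idea, not its implementation):

```python
import re

def render(template, prompt_data, variables, variable_store):
    # variables maps a placeholder name to a stored variable key, which is
    # looked up in the store; prompt_data maps a placeholder directly to a
    # request-specific value. Merge both, then substitute.
    values = {name: variable_store[key] for name, key in variables.items()}
    values.update(prompt_data)
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(values[m.group(1)]), template)

store = {"release_tones": "clear"}
print(render(
    "Write a {{tone}} summary about {{topic}}.",
    prompt_data={"topic": "this release"},
    variables={"tone": "release_tones"},
    variable_store=store,
))
```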

Common operations:

```python
from aibridgecore import PromptInsertion, VariableInsertion

PromptInsertion.update_prompt(
    id="PROMPT_ID",
    name="updated_release_summary",
    prompt_data={"topic": "the latest SDK release"},
    variables={"tone": "release_tones"},
)

VariableInsertion.update_variables(
    id="VARIABLE_ID",
    var_key="release_tones",
    var_value=["clear", "concise", "technical"],
)
```

## Message Queue Support

Queue-backed execution is available through Redis.

When `message_queue=True`, generation returns a response id instead of the final model output:

```python
from aibridgecore import FetchAIResponse, MessageQ, OpenAIService

MessageQ.mq_deque()

queued = OpenAIService.generate(
    prompts=["Generate a short deployment checklist."],
    model="gpt-3.5-turbo",
    message_queue=True,
)

response_id = queued["response_id"]
result = FetchAIResponse.get_response(id=response_id)
print(result)
```
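
`FetchAIResponse.get_response` may be called before the worker has finished, so a simple retry loop is a common pattern. A generic sketch (the SDK does not ship this helper, and the "result is ready" check below is an assumption about the response shape):

```python
import time

def wait_for_response(fetch, response_id, attempts=10, delay=0.5):
    """Poll fetch(id=...) until it returns a non-empty result or attempts run out."""
    for _ in range(attempts):
        result = fetch(id=response_id)
        if result:  # readiness check is an assumption; adapt to the real shape
            return result
        time.sleep(delay)
    raise TimeoutError(f"response {response_id} not ready after {attempts} attempts")

# Usage with the SDK would look like:
# result = wait_for_response(FetchAIResponse.get_response, response_id)
```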

## Image Generation

Image APIs are currently imported from `aibridgecore.image.*`, not from the top-level package.

Available image providers:

- `aibridgecore.image.providers.openai.OpenAIImageProvider`
- `aibridgecore.image.providers.stability.StabilityImageProvider`
- `aibridgecore.image.providers.google_imagen.GoogleImagenProvider`
- `aibridgecore.image.providers.alibaba_wan_image.AlibabaWanImageProvider`

The image request contract supports:

- `text2img`
- `edit`
- `img2img`

Mode support depends on the provider and model you use.

Example:

```python
from aibridgecore.image.contracts import ImageGenerationRequest, ImageMode
from aibridgecore.image.providers.openai import OpenAIImageProvider

provider = OpenAIImageProvider(api_key="YOUR_OPENAI_API_KEY")

request = ImageGenerationRequest(
    prompts=["A clean product render on a studio background"],
    model="gpt-image-1",
    n=1,
    size="1024x1024",
    mode=ImageMode.TEXT_TO_IMAGE,
)

response = provider.generate(request)
artifact = response.results[0].images[0]

with open("generated_image.png", "wb") as file:
    file.write(artifact.content)
```

For edit and image-to-image flows, pass `images=[...]` and optionally `masks=[...]` in the request.

## Video Generation

Video APIs are currently imported from `aibridgecore.video.*`, not from the top-level package.

Available video providers:

- `aibridgecore.video.providers.openai_sora.OpenAISoraProvider`
- `aibridgecore.video.providers.google_veo.GoogleVeoProvider`
- `aibridgecore.video.providers.alibaba_wan.AlibabaWanProvider`
- `aibridgecore.video.providers.stability.StabilityVideoProvider`
- `aibridgecore.video.providers.luma.LumaVideoProvider`

Video generation is asynchronous. You start a job, store the provider job id, and poll for status until the result is ready.

Current request modes:

- `text2video`
- `img2video`
- `video2video` is planned but not enabled yet

Example:

```python
from aibridgecore.video.contracts import VideoGenerationRequest, VideoMode
from aibridgecore.video.providers.openai_sora import OpenAISoraProvider

provider = OpenAISoraProvider(api_key="YOUR_OPENAI_API_KEY")

request = VideoGenerationRequest(
    model="YOUR_VIDEO_MODEL",
    prompt="A slow cinematic drone shot over a rainforest canopy at sunrise",
    duration_seconds=5,
    aspect_ratio="16:9",
    mode=VideoMode.TEXT_TO_VIDEO.value,
)

job = provider.start_generation(request)
status = provider.check_status(job.provider_job_id)

print(job)
print(status)
```

For image-to-video workflows, set `mode=VideoMode.IMG_TO_VIDEO.value` and provide `images=[...]` in the request.
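
The start-then-poll pattern above can be wrapped in a loop. In this sketch, the terminal state names ("completed", "failed") and the `status` field are assumptions, since the exact states a provider reports vary:

```python
import time

def wait_for_video(check_status, provider_job_id, attempts=60, delay=5,
                   done_states=("completed",), failed_states=("failed",)):
    """Poll check_status(job_id) until a terminal state; state names are assumptions."""
    for _ in range(attempts):
        status = check_status(provider_job_id)
        state = getattr(status, "status", status)  # status object or plain string
        if state in done_states:
            return status
        if state in failed_states:
            raise RuntimeError(f"video job {provider_job_id} failed")
        time.sleep(delay)
    raise TimeoutError(f"video job {provider_job_id} still pending after {attempts} checks")

# Usage with a provider would look like:
# final = wait_for_video(provider.check_status, job.provider_job_id)
```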
