Metadata-Version: 2.4
Name: dvgateway
Version: 1.2.6
Summary: Python SDK for DVGateway — real-time voice AI integration
Project-URL: Homepage, https://github.com/OLSSOO-Inc/AI-Ready-Real-Time-Voice-Media-Gateway
Project-URL: Documentation, https://github.com/OLSSOO-Inc/AI-Ready-Real-Time-Voice-Media-Gateway/tree/main/docs
Author-email: "OLSSOO Inc." <dev@olssoo.com>
License-Expression: MIT
Keywords: ai,dvgateway,llm,rtp,sip,stt,tts,voice,webrtc
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Communications :: Telephony
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: websockets>=12.0
Provides-Extra: adapters
Requires-Dist: anthropic>=0.39.0; extra == 'adapters'
Requires-Dist: openai>=1.50.0; extra == 'adapters'
Provides-Extra: dev
Requires-Dist: mypy>=1.8; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=4.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.3.0; extra == 'dev'
Description-Content-Type: text/markdown

# dvgateway

The DVGateway Python SDK connects AI voice services to real-time phone calls.

## Installation

```bash
# Core SDK
pip install dvgateway

# With AI adapters (Anthropic, OpenAI); quoted so shells like zsh
# don't treat the brackets as a glob pattern
pip install "dvgateway[adapters]"
```

## Quick Start

```python
import asyncio
import os
from dvgateway import DVGatewayClient
from dvgateway.adapters.stt import DeepgramAdapter
from dvgateway.adapters.llm import AnthropicAdapter
from dvgateway.adapters.tts import ElevenLabsAdapter

async def main():
    gw = DVGatewayClient(
        base_url="http://localhost:8080",
        auth={"type": "apiKey", "api_key": os.environ["DV_API_KEY"]},
    )

    await (
        gw.pipeline()
        .stt(DeepgramAdapter(api_key=os.environ["DEEPGRAM_API_KEY"], language="ko"))
        .llm(AnthropicAdapter(api_key=os.environ["ANTHROPIC_API_KEY"], model="claude-sonnet-4-6"))
        .tts(ElevenLabsAdapter(api_key=os.environ["ELEVENLABS_API_KEY"]))
        .start()
    )

asyncio.run(main())
```

## Comfort Noise: Preventing Dead Air During AI Processing

When `GW_COMFORT_NOISE_ENABLED=true` is set on the gateway, background noise is
automatically injected into the silent intervals while the AI is processing.
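For example, on the gateway host (only the variable name comes from this README; how the gateway process itself is launched is deployment-specific):

```shell
# Enable comfort-noise injection before starting the gateway process
export GW_COMFORT_NOISE_ENABLED=true
```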

**Automatic when using the pipeline builder (no extra code required):**

```python
# thinking:start/stop signals are sent automatically
await (
    gw.pipeline()
    .stt(DeepgramAdapter(api_key="...", language="ko"))
    .llm(AnthropicAdapter(api_key="...", model="claude-sonnet-4-6"))
    .tts(ElevenLabsAdapter(api_key="..."))
    .start()
)
```

**Manual control:**

```python
# WebSocket signals (low latency)
audio_stream = gw.stream_audio(linked_id)
await audio_stream.send_thinking_start()  # start background noise
# ... AI processing ...
await audio_stream.send_thinking_stop()   # stop background noise

# REST API
await gw.start_thinking(linked_id)
await gw.stop_thinking(linked_id)
```
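When driving the signals manually, an exception between start and stop can leave comfort noise playing indefinitely. A small helper can guarantee the stop signal is always sent; this is a sketch, not part of the SDK, and `thinking` here only wraps the `send_thinking_start`/`send_thinking_stop` calls shown above:

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def thinking(stream):
    """Start comfort noise on entry and always stop it on exit,
    even if the wrapped AI call raises."""
    await stream.send_thinking_start()
    try:
        yield stream
    finally:
        await stream.send_thinking_stop()

# Usage (assuming audio_stream from gw.stream_audio(linked_id)):
#
# async with thinking(audio_stream):
#     reply = await call_llm(...)  # hypothetical AI step
```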

## Requirements

- Python 3.10+
- aiohttp >= 3.9.0
- websockets >= 12.0

## License

MIT
