Metadata-Version: 2.4
Name: videopython
Version: 0.18.2
Summary: Minimal video generation and processing library.
Project-URL: Homepage, https://videopython.com
Project-URL: Repository, https://github.com/bartwojtowicz/videopython/
Project-URL: Documentation, https://videopython.com
Author-email: Bartosz Wójtowicz <bartoszwojtowicz@outlook.com>, Bartosz Rudnikowicz <bartoszrudnikowicz840@gmail.com>, Piotr Pukisz <piotr.pukisz@gmail.com>
License: Apache-2.0
License-File: LICENSE
Keywords: ai,editing,generation,movie,opencv,python,shorts,video,videopython
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: <3.13,>=3.10
Requires-Dist: numpy>=1.25.2
Requires-Dist: opencv-python>=4.9.0.80
Requires-Dist: pillow>=12.1.1
Requires-Dist: torchcodec>=0.9.1
Requires-Dist: tqdm>=4.66.3
Provides-Extra: ai
Requires-Dist: accelerate>=0.29.2; extra == 'ai'
Requires-Dist: coqui-tts>=0.24.0; extra == 'ai'
Requires-Dist: demucs>=4.0.0; extra == 'ai'
Requires-Dist: diffusers>=0.26.3; extra == 'ai'
Requires-Dist: easyocr>=1.7.0; extra == 'ai'
Requires-Dist: elevenlabs>=1.0.0; extra == 'ai'
Requires-Dist: google-generativeai>=0.8.0; extra == 'ai'
Requires-Dist: hf-transfer>=0.1.9; extra == 'ai'
Requires-Dist: httpx>=0.27.0; extra == 'ai'
Requires-Dist: lumaai>=1.0.0; extra == 'ai'
Requires-Dist: numba>=0.61.0; extra == 'ai'
Requires-Dist: ollama>=0.4.5; extra == 'ai'
Requires-Dist: openai-whisper>=20240930; extra == 'ai'
Requires-Dist: openai>=1.0.0; extra == 'ai'
Requires-Dist: protobuf>=5.29.6; extra == 'ai'
Requires-Dist: replicate>=0.20.0; extra == 'ai'
Requires-Dist: requests>=2.28.0; extra == 'ai'
Requires-Dist: runwayml>=0.10.0; extra == 'ai'
Requires-Dist: scikit-learn>=1.3.0; extra == 'ai'
Requires-Dist: scipy>=1.10.0; extra == 'ai'
Requires-Dist: torch>=2.1.0; extra == 'ai'
Requires-Dist: transformers>=4.38.1; extra == 'ai'
Requires-Dist: transnetv2-pytorch>=1.0.5; extra == 'ai'
Requires-Dist: ultralytics>=8.0.0; extra == 'ai'
Requires-Dist: whisperx>=3.4.2; extra == 'ai'
Provides-Extra: dev
Requires-Dist: mypy>=1.8.0; extra == 'dev'
Requires-Dist: pytest-cov>=6.1.1; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: ruff>=0.1.14; extra == 'dev'
Requires-Dist: types-pillow>=10.2.0.20240213; extra == 'dev'
Requires-Dist: types-tqdm>=4.66.0.20240106; extra == 'dev'
Description-Content-Type: text/markdown

# videopython

Minimal Python library for video editing, processing, and AI video generation.
Built primarily for practical editing workflows, with optional AI capabilities layered on top.

Full documentation lives at [videopython.com](https://videopython.com) (guides, examples, and complete API reference).  
Use this README for quick setup and a feature overview.

## Installation

### 1. Install FFmpeg

```bash
# macOS
brew install ffmpeg

# Ubuntu / Debian
sudo apt-get install ffmpeg

# Windows (Chocolatey)
choco install ffmpeg
```

### 2. Install videopython

```bash
# Core video/audio features only
pip install videopython
# or
uv add videopython

# Include AI features
pip install "videopython[ai]"
# or
uv add videopython --extra ai
```

Python support: `>=3.10, <3.13`.

## Quick Start

### Video editing

```python
from videopython import Video
from videopython.base import FadeTransition

intro = Video.from_path("intro.mp4").resize(1080, 1920)
clip = Video.from_path("raw.mp4").cut(10, 25).resize(1080, 1920).resample_fps(30)
final = intro.transition_to(clip, FadeTransition(effect_time_seconds=0.5))
final = final.add_audio_from_file("music.mp3")
final.save("output.mp4")
```

### JSON editing plans (`VideoEdit`)

```python
from videopython.base import VideoEdit

plan = {
    "segments": [
        {
            "source": "raw.mp4",
            "start": 10.0,
            "end": 20.0,
            "transforms": [{"op": "resize", "args": {"height": 1280}}, {"op": "speed_change", "args": {"speed": 1.25}}],
        }
    ],
    "post_effects": [
        {"op": "blur_effect", "args": {"mode": "constant", "iterations": 1}, "apply": {"start": 0.0, "stop": 1.0}}
    ],
}

edit = VideoEdit.from_dict(plan)
edit.validate()  # dry run via VideoMetadata (no frame loading)
final = edit.run()
final.save("output.mp4")
```

In a plan, use `post_transforms` for transforms and `post_effects` for effects applied to the assembled video. `VideoEdit.json_schema()` returns a parser-aligned JSON Schema for generating and validating plans.
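Because plans are plain dictionaries, you can sanity-check their shape before handing them to `VideoEdit.from_dict`. The helper below is a hypothetical, library-independent sketch (not part of videopython) that verifies the keys used in the example above:

```python
def check_plan_shape(plan: dict) -> list[str]:
    """Return a list of structural problems found in an edit-plan dict."""
    problems = []
    segments = plan.get("segments")
    if not isinstance(segments, list) or not segments:
        problems.append("plan must contain a non-empty 'segments' list")
        return problems
    for i, seg in enumerate(segments):
        if "source" not in seg:
            problems.append(f"segment {i} is missing 'source'")
        for transform in seg.get("transforms", []):
            if "op" not in transform:
                problems.append(f"segment {i} has a transform without 'op'")
    return problems


plan = {
    "segments": [
        {
            "source": "raw.mp4",
            "start": 10.0,
            "end": 20.0,
            "transforms": [{"op": "resize", "args": {"height": 1280}}],
        }
    ]
}
assert check_plan_shape(plan) == []
```

For authoritative validation, prefer `edit.validate()` or the schema from `VideoEdit.json_schema()`; this check only catches gross structural mistakes early.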

### AI generation

```python
from videopython.ai import TextToImage, ImageToVideo, TextToSpeech

image = TextToImage(backend="openai").generate_image("A cinematic mountain sunrise")
video = ImageToVideo(backend="local").generate_video(image=image, fps=24).resize(1080, 1920)
audio = TextToSpeech(backend="openai").generate_audio("Welcome to videopython.")
video.add_audio(audio).save("ai_video.mp4")
```

## Functionality Overview

### `videopython.base` (no AI dependencies)

- Video I/O and metadata: `Video`, `VideoMetadata`, `FrameIterator`
- Editing plans: `VideoEdit`, `SegmentConfig` (JSON/LLM-friendly multi-segment plans with schema generation)
- Transformations: cut by time/frame, resize, crop, FPS resampling, speed change, picture-in-picture
- Clip composition: concatenate, split, transitions (`FadeTransition`, `BlurTransition`, `InstantTransition`)
- Visual effects: blur, zoom, color grading, vignette, Ken Burns, image overlays
- Audio pipeline: load/save audio, overlay/concat, normalize, time-stretch, silence detection, segment classification
- Text/subtitles: transcription data classes and `TranscriptionOverlay`
- Scene detection: histogram-based scene boundaries (`detect`, `detect_streaming`, `detect_parallel`)
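Cutting by time versus by frame is just arithmetic on the clip's frame rate. The snippet below is a standalone sketch of that mapping (it does not use videopython and assumes a constant FPS):

```python
def time_to_frame(t_seconds: float, fps: float) -> int:
    """Map a timestamp to the nearest frame index at a constant FPS."""
    return round(t_seconds * fps)


def resampled_frame_count(n_frames: int, src_fps: float, dst_fps: float) -> int:
    """Frame count after FPS resampling, keeping duration constant."""
    return round(n_frames * dst_fps / src_fps)


# A cut from 10s to 25s at 30 FPS spans frames 300..750.
assert time_to_frame(10, 30) == 300
assert time_to_frame(25, 30) == 750

# Resampling 750 frames from 30 FPS to 24 FPS yields 600 frames (same 25s).
assert resampled_frame_count(750, 30, 24) == 600
```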

Docs:
- [Core API](https://videopython.com/api/index/)
- [Video](https://videopython.com/api/core/video/)
- [Audio](https://videopython.com/api/core/audio/)
- [Editing Plans (`VideoEdit`)](https://videopython.com/api/editing/)
- [Transforms](https://videopython.com/api/transforms/)
- [Transitions](https://videopython.com/api/transitions/)
- [Effects](https://videopython.com/api/effects/)
- [Text & Transcription](https://videopython.com/api/text/)

### `videopython.ai` (install with `[ai]`)

- Generation: `TextToVideo`, `ImageToVideo`, `TextToImage`, `TextToSpeech`, `TextToMusic`
- Understanding:
  - Transcription and captioning: `AudioToText`, `ImageToText`
  - Detection/classification: `ObjectDetector`, `FaceDetector`, `TextDetector`, `ShotTypeClassifier`
  - Motion/action/scene understanding: `CameraMotionDetector`, `MotionAnalyzer`, `ActionRecognizer`, `SemanticSceneDetector`
  - Multi-signal frame analysis: `CombinedFrameAnalyzer`
- AI transforms: `FaceTracker`, `FaceTrackingCrop`, `SplitScreenComposite`, `AutoFramingCrop`
- Dubbing/revoicing: `videopython.ai.dubbing.VideoDubber`
- Object swapping/inpainting: `ObjectSwapper`

Docs:
- [AI Generation](https://videopython.com/api/ai/generation/)
- [AI Understanding](https://videopython.com/api/ai/understanding/)
- [AI Transforms](https://videopython.com/api/ai/transforms/)
- [AI Dubbing](https://videopython.com/api/ai/dubbing/)
- [AI Object Swapping](https://videopython.com/api/ai/swapping/)

## Backends and API Keys

Cloud backends read API keys from these environment variables:

- `OPENAI_API_KEY`
- `GOOGLE_API_KEY`
- `ELEVENLABS_API_KEY`
- `RUNWAYML_API_KEY`
- `LUMAAI_API_KEY`
- `REPLICATE_API_TOKEN`

Example:

```bash
export OPENAI_API_KEY="your-key"
export GOOGLE_API_KEY="your-key"
```
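A quick way to see which cloud backends are configured is to check the environment from Python. This uses only the standard library and the variable names listed above (`configured_backends` is an illustrative helper, not part of videopython):

```python
import os

API_KEY_VARS = [
    "OPENAI_API_KEY",
    "GOOGLE_API_KEY",
    "ELEVENLABS_API_KEY",
    "RUNWAYML_API_KEY",
    "LUMAAI_API_KEY",
    "REPLICATE_API_TOKEN",
]


def configured_backends(env=None) -> list[str]:
    """Return the API key variables that are set and non-empty."""
    env = os.environ if env is None else env
    return [name for name in API_KEY_VARS if env.get(name)]


print(configured_backends())
```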

Notes:
- Local generation models can require substantial GPU resources.
- Backend/model details by class are documented at [videopython.com](https://videopython.com).

## Examples

- [Social Media Clip](https://videopython.com/examples/social-clip/)
- [AI-Generated Video](https://videopython.com/examples/ai-video/)
- [Auto-Subtitles](https://videopython.com/examples/auto-subtitles/)
- [Processing Large Videos](https://videopython.com/examples/large-videos/)

## Development

See [`DEVELOPMENT.md`](DEVELOPMENT.md) for local setup, testing, and contribution workflow.
