Metadata-Version: 2.4
Name: x4d-devkit
Version: 0.4.0
Summary: X-4D dataset format SDK — load, validate, evaluate, and convert autonomous driving datasets
Author: windzu
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/windzu/x4d-devkit
Project-URL: Repository, https://github.com/windzu/x4d-devkit
Project-URL: Issues, https://github.com/windzu/x4d-devkit/issues
Keywords: autonomous-driving,dataset,evaluation,annotation,lidar,camera
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.24
Provides-Extra: converters
Requires-Dist: nuscenes-devkit>=1.1.0; extra == "converters"
Provides-Extra: client
Requires-Dist: httpx>=0.27; extra == "client"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: httpx>=0.27; extra == "dev"
Dynamic: license-file

# x4d-devkit

X-4D dataset format SDK for autonomous driving — load, validate, evaluate, and convert datasets.

## Installation

```bash
pip install x4d-devkit
```

With optional dependencies:

```bash
# NuScenes format converter (quote the extras so shells like zsh
# don't expand the brackets)
pip install "x4d-devkit[converters]"

# Platform API client
pip install "x4d-devkit[client]"
```

## Quick Start

### Load a clip

```python
from x4d_devkit import ClipLoader

loader = ClipLoader("/path/to/clip")
print(loader.meta)

for sample in loader.samples:
    for sd in loader.sample_data_for_sample(sample.token):
        print(sd.channel, sd.file_path)
```

### Coordinate frame transforms

Point clouds and annotations can be loaded in different coordinate frames:

```python
loader = ClipLoader("/path/to/clip")
sd = loader.sample_data_for_channel("LIDAR_TOP")[0]

# Load point cloud in different frames
pts_sensor = loader.load_point_cloud(sd, frame="sensor")  # raw (default)
pts_ego = loader.load_point_cloud(sd, frame="ego")        # sensor → ego
pts_world = loader.load_point_cloud(sd, frame="world")    # sensor → world

# Get annotations in ego frame (for training)
sample = loader.samples[0]
anns_ego = loader.annotations_for_sample(sample.token, frame="ego")

# Get the transform matrix directly
T = loader.get_transform(sd, from_frame="sensor", to_frame="world")
pts_world = T.apply(pts_sensor[:, :3])  # or use T.as_matrix for 4x4
```
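Under the hood, a frame conversion like sensor → ego is a 4×4 homogeneous transform applied to N×3 points. A NumPy-only illustration of that math (independent of the SDK — the matrix below is a made-up example, not a real calibration):

```python
import numpy as np

# Hypothetical sensor -> ego transform: rotate 90 degrees about Z,
# then translate. Example values only; real transforms come from
# the clip's calibration data.
theta = np.pi / 2
T_sensor_to_ego = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 1.5],
    [np.sin(theta),  np.cos(theta), 0.0, 0.0],
    [0.0,            0.0,           1.0, 1.8],
    [0.0,            0.0,           0.0, 1.0],
])

pts_sensor = np.array([[10.0, 0.0, -1.8]])     # N x 3 points

# Append a homogeneous 1, multiply, then drop it again.
ones = np.ones((pts_sensor.shape[0], 1))
pts_h = np.hstack([pts_sensor, ones])          # N x 4
pts_ego = (T_sensor_to_ego @ pts_h.T).T[:, :3]  # N x 3
```

Chaining two such matrices (e.g. sensor → ego, then ego → world) is a single matrix product, which is presumably what `get_transform` returns for the sensor → world case.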

### Validate a clip

```bash
x4d validate /path/to/clip
```

```python
from x4d_devkit import validate_clip

report = validate_clip("/path/to/clip")
print(report)
```
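Structural validation of this kind typically means checking that required files exist and that metadata parses. A minimal, SDK-independent sketch of the idea (the `meta.json` file name and the checks are illustrative — the real X-4D clip layout is defined by the format spec, and `validate_clip` is the authoritative implementation):

```python
import json
from pathlib import Path

def check_clip(clip_dir, required_files=("meta.json",)):
    """Collect human-readable problems with a clip directory.

    Returns an empty list when these (illustrative) checks pass.
    """
    clip = Path(clip_dir)
    if not clip.is_dir():
        return [f"not a directory: {clip}"]
    problems = []
    for name in required_files:
        path = clip / name
        if not path.is_file():
            problems.append(f"missing file: {name}")
        elif name.endswith(".json"):
            try:
                json.loads(path.read_text())
            except json.JSONDecodeError as exc:
                problems.append(f"invalid JSON in {name}: {exc}")
    return problems
```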

### Detection evaluation

```python
from x4d_devkit import DetectionEval, DetectionConfig

config = DetectionConfig(
    class_names=["car", "pedestrian", "bicycle"],
    dist_thresholds=[0.5, 1.0, 2.0, 4.0],
)
evaluator = DetectionEval(config, gt_clips=[...], pred_clips=[...])
result = evaluator.evaluate()
print(f"mAP: {result.mAP:.3f}, NDS: {result.NDS:.3f}")
```
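The `dist_thresholds` above suggest nuScenes-style matching: a prediction counts as a true positive when its center lies within a distance threshold of an unmatched ground-truth box. A rough, SDK-independent sketch of per-class AP at one threshold (all names are local, and this uses a simple trapezoidal area rather than whatever interpolation the devkit actually applies):

```python
import numpy as np

def average_precision(gt_centers, pred_centers, pred_scores, dist_thresh):
    """Greedy center-distance matching, then AP as the area under
    the precision-recall curve. Assumes at least one GT box."""
    gt_centers = np.asarray(gt_centers, dtype=float)
    pred_centers = np.asarray(pred_centers, dtype=float)
    order = np.argsort(-np.asarray(pred_scores))  # best score first
    matched = set()
    tp = np.zeros(len(order))
    for i, idx in enumerate(order):
        dists = np.linalg.norm(gt_centers - pred_centers[idx], axis=1)
        for j in matched:           # each GT box matches at most once
            dists[j] = np.inf
        j = int(np.argmin(dists))
        if dists[j] <= dist_thresh:
            tp[i] = 1.0
            matched.add(j)
    cum_tp = np.cumsum(tp)
    recall = cum_tp / len(gt_centers)
    precision = cum_tp / (np.arange(len(order)) + 1)
    # Trapezoidal area under the precision-recall curve.
    return float(np.sum(np.diff(recall) * (precision[1:] + precision[:-1]) / 2.0))
```

mAP would then average this over classes and thresholds; NDS additionally folds in true-positive error metrics (translation, scale, orientation, etc.) in the nuScenes formulation.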

### Convert from NuScenes

```python
from x4d_devkit.converters import NuScenesConverter

converter = NuScenesConverter("/path/to/nuscenes")
converter.convert_scene("scene-0001", output_dir="/path/to/output")
```

## Modules

| Module | Description |
|--------|-------------|
| `core` | Data models, token generation, coordinate transforms, clip loader |
| `eval` | Detection evaluation (mAP, TP metrics, NDS) |
| `converters` | Format converters (NuScenes → X4D) |
| `validation` | Clip structure and data validation |
| `client` | X-4D platform API client |

## License

Apache License 2.0
