Metadata-Version: 2.1
Name: smartdownsample
Version: 2.0.4
Summary: Smart image downsampling for image classification datasets
Author: Peter van Lunteren
License: MIT
Project-URL: Homepage, https://github.com/PetervanLunteren/smartdownsample
Project-URL: Repository, https://github.com/PetervanLunteren/smartdownsample
Project-URL: Issues, https://github.com/PetervanLunteren/smartdownsample/issues
Project-URL: Documentation, https://github.com/PetervanLunteren/smartdownsample#readme
Keywords: image,downsampling,camera-trap,machine-learning,computer-vision,diversity,deduplication
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Image Processing
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.21.0
Requires-Dist: Pillow>=8.0.0
Requires-Dist: torch>=2.0.0
Requires-Dist: torchvision>=0.15.0
Requires-Dist: scikit-learn>=1.0.0
Requires-Dist: tqdm>=4.62.0
Requires-Dist: natsort>=8.0.0
Requires-Dist: matplotlib>=3.5.0
Requires-Dist: plotly>=5.0.0
Provides-Extra: dev
Requires-Dist: pytest>=6.0; extra == "dev"
Requires-Dist: black>=22.0; extra == "dev"
Requires-Dist: flake8>=4.0; extra == "dev"
Requires-Dist: mypy>=0.900; extra == "dev"

# smartdownsample

**Embedding-based diverse downsampling for large image datasets**

`smartdownsample` selects representative subsets from large image collections while preserving visual diversity. It uses DINOv2 embeddings and agglomerative clustering to group visually similar images, then samples across clusters to maximize variety.

Built for image collections that:
1. Contain more images than you need for training, and
2. Have a high level of redundancy (e.g., many near-duplicate or visually similar frames)

In many ML workflows, majority classes can have hundreds of thousands of images. These often need to be reduced for efficiency or class balance, without discarding too much valuable variation. `smartdownsample` offers a practical solution: fast downsampling that keeps diversity, cutting processing time from hours (or days) to minutes.

This approach builds on work by Dante Wasmuht and Peter Bermant at [Conservation X Labs](https://conservationxlabs.com/).

## Installation

```bash
pip install smartdownsample
```

Requires Python >= 3.8. GPU recommended but not required (falls back to CPU).

Note: `pip install smartdownsample` installs CPU-only PyTorch. For GPU support, install the CUDA version of PyTorch first ([pytorch.org](https://pytorch.org/get-started/locally/)).

## Usage

```python
from smartdownsample import sample_diverse

selected = sample_diverse(
    image_paths=my_image_list,
    target_count=50000
)
```

## Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `image_paths` | Required | List of image file paths (str or Path objects) |
| `target_count` | Required | Exact number of images to select |
| `distance_threshold` | `0.5` | Cosine distance threshold for clustering. Lower = more clusters (stricter). Higher = fewer clusters (more lenient). |
| `n_workers` | `4` | Number of parallel workers for image loading |
| `show_progress` | `True` | Display progress bars during processing |
| `show_summary` | `True` | Print cluster statistics and distribution summary |
| `save_distribution` | `None` | Path to save distribution chart as PNG (creates directories if needed) |
| `save_thumbnails` | `None` | Path to save thumbnail grids as PNG (creates directories if needed) |
| `image_loading_errors` | `"raise"` | How to handle image loading errors: `"raise"` (fail immediately) or `"skip"` (continue with remaining images) |
| `return_indices` | `False` | Return 0-based indices instead of paths (refers to original input list order) |
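
For reference, a call that exercises the optional parameters might look like this (the paths are placeholders):

```python
from smartdownsample import sample_diverse

selected = sample_diverse(
    image_paths=my_image_list,
    target_count=50_000,
    distance_threshold=0.5,                        # default clustering strictness
    n_workers=8,                                   # parallel image loading
    save_distribution="reports/distribution.png",  # kept-vs-excluded chart
    save_thumbnails="reports/thumbnails.png",      # 5x5 grids per cluster
    image_loading_errors="skip",                   # don't abort on a corrupt file
    return_indices=False,                          # return paths, not indices
)
```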

## How it works

The algorithm has four core steps, plus two optional outputs:

1. **Embedding extraction**
   Each image is passed through DINOv2 ViT-S/14 to produce a 384-dimensional embedding vector that captures semantic visual features (subjects, backgrounds, composition, lighting). Embeddings are L2-normalized. The model is loaded once and cached for subsequent calls. (A sketch of this stage appears after the list.)

2. **Clustering**
   Images are grouped using agglomerative clustering (cosine distance, average linkage) with a fixed distance threshold. The number of clusters reflects the natural visual structure of the data, not the selection budget. This means larger clusters (common visual patterns) get proportionally more images in the selection, while small clusters (rare/unique images) are still guaranteed representation. (A sketch covering this step and step 4 appears after the list.)

3. **Divide-and-conquer scaling** (for large datasets)

   Clustering all images at once requires a full pairwise distance matrix. For 10,000 images that's 100 million entries, and for 1,000,000 images it's 1 trillion. Instead, for datasets larger than 2,000 images, clustering is done in stages:

   1. Shuffle the images randomly and split them into groups of ~2,000.
   2. Cluster each group independently (much smaller distance matrices).
   3. From each cluster within each group, pick the 5 most central images as representatives.
   4. Re-cluster all the representatives together. This merges clusters that were separated by the random split, e.g., visually similar images that ended up in different groups now get reunited.
   5. Every image inherits the final cluster ID of its representative.

   The random shuffle ensures each group is a representative mix. The re-clustering stitches it back together. The result is roughly the same as clustering everything at once, but at a fraction of the cost.

   If the representative set is still too large after several rounds (very large datasets, 500K+), the final merging step uses MiniBatchKMeans instead of agglomerative clustering. KMeans scales linearly because it doesn't build a pairwise distance matrix. The earlier rounds still use full agglomerative clustering where the real grouping happens, so the impact on quality is minimal.

4. **Cluster-aware sampling**
   - Phase 1 (diversity): Take the most central image (medoid) from each cluster, guaranteeing every visual group is represented.
   - Phase 2 (proportional fill): Distribute the remaining budget across clusters proportionally to their size using largest-remainder allocation. This keeps the split fair: a cluster with twice as many images gets roughly twice as many selections, without rounding bias toward the largest clusters. Within each cluster, images are selected by centrality rank (most representative first). (Both phases are shown in the sketch after the list.)

5. **Save distribution chart** (optional)
   - Vertical bar chart of kept vs. excluded images per cluster

6. **Save thumbnail grids** (optional)
   - 5x5 grids from each cluster, for quick visual review
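
For reference, here is a minimal sketch of the embedding stage (step 1), using DINOv2's public `torch.hub` entry point. The preprocessing details are illustrative and may differ from what `smartdownsample` does internally:

```python
import torch
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Official DINOv2 hub entry point; ViT-S/14 returns 384-dim CLS embeddings.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").to(device).eval()

# Illustrative preprocessing (ImageNet normalization; 224 is a multiple of
# the model's 14-pixel patch size).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    v = model(x).squeeze(0)  # shape (384,)
    return v / v.norm()      # L2-normalize, as described above
```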
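
And a sketch of the clustering and sampling stages (steps 2 and 4), showing the single-pass path only (the divide-and-conquer staging of step 3 is omitted). It assumes `embeddings` is an `(n, 384)` array of L2-normalized vectors and that `target_count` is at least the number of clusters; the names here are illustrative, not the library's internal API:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_and_sample(embeddings, target_count):
    # Step 2: agglomerative clustering with cosine distance, average linkage,
    # and a fixed threshold. (`metric=` requires scikit-learn >= 1.2; older
    # versions call it `affinity=`.)
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=0.5,
        metric="cosine", linkage="average",
    ).fit_predict(embeddings)

    # Rank each cluster's members by centrality: similarity to the cluster
    # centroid (a dot product suffices because embeddings are L2-normalized).
    ranked = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        centroid = embeddings[members].mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        ranked.append(members[np.argsort(-(embeddings[members] @ centroid))])

    # Phase 1 (diversity): one medoid per cluster.
    quotas = np.ones(len(ranked), dtype=int)

    # Phase 2 (proportional fill): largest-remainder allocation of the rest.
    sizes = np.array([len(m) for m in ranked])
    shares = (target_count - len(ranked)) * sizes / sizes.sum()
    quotas += np.floor(shares).astype(int)
    fractions = shares - np.floor(shares)
    for c in np.argsort(-fractions)[: target_count - quotas.sum()]:
        quotas[c] += 1

    # Most central images first within each cluster (capping quotas at
    # cluster size is omitted for brevity).
    return [int(i) for m, q in zip(ranked, quotas) for i in m[:q]]
```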

## Performance

Approximate times on an NVIDIA RTX 3080 Ti.

| Dataset size | Embedding time (GPU) | Clustering | Total |
|-------------|---------------------|------------|-------|
| 1,000 images | ~1s | instant | ~2s |
| 10,000 images | ~15s | ~1s | ~20s |
| 100,000 images | ~2.5 min | ~10s | ~3 min |
| 1,000,000 images | ~25 min | ~2 min | ~30 min |

## License

MIT License, see [LICENSE file](https://github.com/PetervanLunteren/smartdownsample/blob/main/LICENSE).

## TODO
- [ ] The thumbnail plots can get very large (several dozen MB). Perhaps we should cap them at a maximum of 10x10 clusters. It's just for a visual check anyway, so there's no need to show all clusters.
- [ ] The distribution plot's x-axis ticks overlap. Remove them; they are redundant. It's about the main red/green idea anyway.
- [ ] The in-bucket assignment: how does it choose which images to keep and which to exclude? Investigate. Can we improve this?
