Metadata-Version: 2.4
Name: eval-learn
Version: 0.1.6
Summary: Unlearning Benchmark for Text-to-Image Models
Author: Eval-Learn Team
License: MIT
Project-URL: Homepage, https://github.com/nikhilr2907/eval-learn
Project-URL: Documentation, https://eval-learn.readthedocs.io
Project-URL: Source, https://github.com/nikhilr2907/eval-learn
Project-URL: Bug Tracker, https://github.com/nikhilr2907/eval-learn/issues
Keywords: diffusion,unlearning,evaluation,text-to-image,benchmark,machine-learning
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: numpy>=2.4.0
Requires-Dist: scipy>=1.17.0
Requires-Dist: tqdm>=4.66.5
Requires-Dist: requests>=2.28.1
Requires-Dist: safetensors>=0.7.0
Requires-Dist: diffusers>=0.37.0
Requires-Dist: huggingface_hub>=1.8.0
Requires-Dist: Pillow>=12.1.0
Requires-Dist: python-dotenv>=1.2.2
Requires-Dist: torch>=2.11.0
Requires-Dist: transformers>=5.3.0
Requires-Dist: datasets>=4.8.0
Requires-Dist: pyyaml>=5.1
Provides-Extra: asr
Requires-Dist: nudenet>=3.4.2; extra == "asr"
Provides-Extra: fid
Requires-Dist: torchvision>=0.26.0; extra == "fid"
Provides-Extra: coco
Requires-Dist: torchvision>=0.26.0; extra == "coco"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: ruff>=0.0.1; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Provides-Extra: all
Requires-Dist: eval-learn[asr,coco,dev,fid]; extra == "all"

# eval-learn

A benchmarking framework for evaluating concept-unlearning techniques in text-to-image diffusion models.

Unlearning techniques modify or constrain Stable Diffusion to suppress specific concepts such as nudity, violence, artistic styles, or named individuals. eval-learn provides a common interface to run, compare, and evaluate these techniques under consistent conditions.

---

## Techniques

| Technique | Key |
|-----------|-----|
| Erased Stable Diffusion | `esd` |
| Mass Concept Erasure | `mace` |
| Unified Concept Editing | `uce` |
| Selective Synaptic Dampening | `ssd` |
| Concept Ablation | `ca` |
| CoGFD | `cogfd` |
| TraSCE | `trasce` |
| SAFREE | `safree` |
| Safe Latent Diffusion | `sld` |
| AdvUnlearn | `advunlearn` |
| Concept Steerers | `concept_steerers` |
| SAeUron | `saeuron` |
| Free Run (custom model) | `free_run` |

## Metrics

| Metric | Key | What it measures |
|--------|-----|-----------------|
| ASR — I2P | `asr_i2p` | Attack success rate on I2P prompts |
| ASR — P4D | `asr_p4d` | Attack success rate via P4D adversarial prompts |
| ASR — MMA Diffusion | `asr_mma_diffusion` | Attack success rate via MMA-Diffusion GCG attack |
| ASR — Ring-A-Bell | `asr_ring_a_bell` | Attack success rate via genetic adversarial prompt discovery |
| Erasure Retention Rate | `err` | Concept erasure vs. unrelated concept retention |
| FID | `fid` | Image quality vs. COCO reference |
| CLIP Score | `clip_score` | Prompt-image alignment |
| UA-IRA | `ua_ira` | Unsafe concept alignment vs. retain concept alignment |
| TIFA | `tifa` | Text-image faithfulness via VQA |

---

## Installation

### 1. Install eval-learn

```bash
pip install eval-learn
```

### 2. Install technique packages

Technique implementations are hosted on [Hugging Face](https://huggingface.co/datasets/Unlearningltd/Packages). Clone the repo once, pull LFS files, then install only what you need:

```bash
git clone https://huggingface.co/datasets/Unlearningltd/Packages
cd Packages
git lfs pull
```

```bash
pip install -e esd/
pip install -e mace/
pip install -e uce/
pip install -e ssd/
pip install -e ca/
pip install -e cogfd/
pip install -e trasce/
pip install -e saeuron/
pip install -e safree/
pip install -e concept-steerers/
pip install -e advunlearn/
```

SLD is built into eval-learn via the `diffusers` library and requires no extra install.

### 3. Install metric packages

From the cloned `Packages` directory (see step 2 above):

```bash
pip install -e p4d/
pip install -e mma_diff/
pip install -e RING_A_BELL/
pip install -e Q16/
```

```bash
# NudeNet (nudity ASR)
pip install "eval-learn[asr]"

# FID / COCO metrics
pip install "eval-learn[fid,coco]"
```

### 4. Hugging Face authentication

Create a `.env` file in the directory from which you run `eval-learn run`:

```
HF_TOKEN=your_token_here
```
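eval-learn loads this file via its `python-dotenv` dependency, so the token becomes an ordinary environment variable at runtime. A stdlib-only sketch of what that loading amounts to (not the actual loader, just the equivalent behavior):

```python
# Stdlib-only sketch of .env loading; eval-learn itself uses python-dotenv,
# but the effect is the same: KEY=value lines become environment variables.
import os

def load_env_file(path=".env"):
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # skip blanks and comments; split on the first "="
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    # don't override variables already set in the environment
                    os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env present; rely on the existing environment

load_env_file()
```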

---

## Quick start

Benchmarks are defined in a JSON or YAML config file:

```json
{
  "output_dir": "results/esd_nudity",
  "technique": {
    "name": "esd",
    "config": { "erase_concept": "nudity", "train_method": "noxattn", "device": "cuda" }
  },
  "metrics": [
    { "name": "asr_i2p",    "config": { "concept_name": "nudity", "device": "cuda" } },
    { "name": "fid",        "config": { "device": "cuda" } },
    { "name": "clip_score", "config": { "device": "cuda" } }
  ]
}
```
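Since configs may also be YAML, the same benchmark expressed as YAML (a direct translation of the JSON above, assuming identical field names):

```yaml
output_dir: results/esd_nudity
technique:
  name: esd
  config:
    erase_concept: nudity
    train_method: noxattn
    device: cuda
metrics:
  - name: asr_i2p
    config: { concept_name: nudity, device: cuda }
  - name: fid
    config: { device: cuda }
  - name: clip_score
    config: { device: cuda }
```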

Run it:

```bash
eval-learn run --config config.json
```

Results are written to `output_dir` as JSON.
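Since results land in `output_dir` as JSON, a short sketch for collecting them afterwards; the per-metric file names here are an assumption, not the documented layout:

```python
# Hypothetical sketch: gather every JSON result file from a run's output
# directory into one dict keyed by file name.
import json
from pathlib import Path

def load_results(output_dir):
    """Return {file name: parsed JSON} for every result file found."""
    results = {}
    for path in sorted(Path(output_dir).glob("*.json")):
        with open(path) as f:
            results[path.name] = json.load(f)
    return results

print(load_results("results/esd_nudity"))  # empty dict until a run completes
```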

### Useful commands

```bash
eval-learn plugins   # list installed techniques and metrics
eval-learn models    # show the base model each technique targets
```

---

## Examples

The [`examples/`](examples/) directory contains ready-to-run configs for all techniques across nudity and violence concepts:

```
examples/
  nudity/     one config per technique (esd.json, mace.json, ...)
  violence/   same, for violence concept
  data/       seed prompts and concept vectors used by the configs
```
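At a high level, a run-all driver like the demo scripts amounts to invoking `eval-learn run` once per config file. A minimal stand-in (the directory path matches the tree above; everything else is an assumption about what the scripts do):

```python
# Hypothetical driver sketch: run every nudity config in sequence by
# shelling out to the eval-learn CLI. Error handling and logging omitted.
import subprocess
from pathlib import Path

def run_all(config_dir="examples/nudity"):
    configs = sorted(Path(config_dir).glob("*.json"))
    for config in configs:
        subprocess.run(["eval-learn", "run", "--config", str(config)], check=True)
    return [c.name for c in configs]

run_all()  # no-op when examples/nudity is absent
```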

Run all nudity benchmarks in sequence:

```bash
python nudity_unlearning_demo.py
```

Run all violence benchmarks:

```bash
python nudity_unlearning_demo_violence.py
```

---

## Documentation

Full configuration reference, technique guides, metric descriptions, and experiment recipes:

**https://eval-learn.readthedocs.io**

Package on PyPI: **https://pypi.org/project/eval-learn/**

Key pages:

- [Getting started](docs/docs/getting-started.md)
- [Technique-metric compatibility](docs/docs/running-experiments/compatibility.md)
- [Caching adversarial prompts and technique weights](docs/docs/running-experiments/caching-adversarial-prompts.md)

---

## License

MIT
