Metadata-Version: 2.4
Name: attackbenchlib
Version: 1.0.9
Summary: A Python package for benchmarking adversarial attacks and defenses.
Author-email: Antonio Cinà <antonio.cina@unige.it>, Riccardo Trebiani <richitrebbia@gmail.com>
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Operating System :: OS Independent
Requires-Python: <3.13,>=3.9
Description-Content-Type: text/markdown
Requires-Dist: torch>=2.4
Requires-Dist: torchvision>=0.19
Requires-Dist: matplotlib>=3.5.1
Requires-Dist: pandas>=1.4.0
Requires-Dist: scipy>=1.8.0
Requires-Dist: numpy>=1.21.0
Requires-Dist: tqdm>=4.56.2
Requires-Dist: wget>=3.2
Requires-Dist: wandb>=0.15.0
Provides-Extra: attacks
Requires-Dist: adversarial-robustness-toolbox; extra == "attacks"
Requires-Dist: foolbox; extra == "attacks"
Requires-Dist: torchattacks; extra == "attacks"
Requires-Dist: cleverhans==4.0.0; extra == "attacks"
Provides-Extra: deeprobust
Requires-Dist: deeprobust; extra == "deeprobust"
Requires-Dist: scipy<1.8.0,>=1.5.0; extra == "deeprobust"
Provides-Extra: models
Requires-Dist: pillow>=8.0.0; extra == "models"
Requires-Dist: requests>=2.25.0; extra == "models"
Requires-Dist: timm>=0.9.0; extra == "models"
Requires-Dist: transformers>=4.20.0; extra == "models"
Requires-Dist: robustbench>=1.1; extra == "models"
Requires-Dist: pyautoattack>=0.2.0; extra == "models"
Requires-Dist: pretrainedmodels>=0.7.4; extra == "models"
Provides-Extra: metrics
Requires-Dist: scikit-learn>=1.0.0; extra == "metrics"
Requires-Dist: seaborn>=0.11.0; extra == "metrics"
Requires-Dist: plotly>=5.0.0; extra == "metrics"
Requires-Dist: tabulate>=0.9.0; extra == "metrics"
Provides-Extra: all
Requires-Dist: adversarial-robustness-toolbox; extra == "all"
Requires-Dist: foolbox; extra == "all"
Requires-Dist: torchattacks; extra == "all"
Requires-Dist: cleverhans==4.0.0; extra == "all"
Requires-Dist: robustbench>=1.1; extra == "all"
Requires-Dist: pyautoattack>=0.2.0; extra == "all"
Requires-Dist: timm>=0.9.0; extra == "all"
Requires-Dist: transformers>=4.20.0; extra == "all"
Requires-Dist: pretrainedmodels>=0.7.4; extra == "all"
Requires-Dist: scikit-learn>=1.0.0; extra == "all"
Requires-Dist: seaborn>=0.11.0; extra == "all"
Requires-Dist: plotly>=5.0.0; extra == "all"
Requires-Dist: tabulate>=0.9.0; extra == "all"
Requires-Dist: pillow>=8.0.0; extra == "all"
Requires-Dist: requests>=2.25.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=6.0; extra == "dev"
Requires-Dist: black>=22.0; extra == "dev"
Requires-Dist: isort>=5.0; extra == "dev"
Requires-Dist: flake8>=4.0; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx>=7.0.0; extra == "docs"
Requires-Dist: sphinx-rtd-theme>=2.0.0; extra == "docs"
Requires-Dist: sphinx-autodoc-typehints>=1.19.0; extra == "docs"
Requires-Dist: myst-parser>=2.0.0; extra == "docs"

# **AttackBenchLib**: Evaluating Gradient-based Attacks for Adversarial Examples

Riccardo Trebiani, Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Ismail Ben Ayed and Fabio Roli

**Leaderboard**: [https://attackbench.github.io/](https://attackbench.github.io/)

**Paper:** [https://arxiv.org/pdf/2404.19460](https://arxiv.org/pdf/2404.19460)

**Tutorial Notebook:** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rzzLRjMovcns25qOeEXt15R3L2Md_Pst?usp=sharing)
## How it works
AttackBenchLib implements the framework described in the AttackBench paper as a modular, user-friendly library, enabling multiple workflows and kinds of analysis through a single package.
The <code>AttackBench</code> framework aims to fairly compare gradient-based attacks based on their security evaluation curves. To this end, we derive a process involving five distinct stages, as depicted below.
  - In stage (1), we construct a list of diverse non-robust and robust models to assess the attacks' impact in various settings, thus testing their adaptability to diverse defensive strategies.
  - In stage (2), we define an environment for testing gradient-based attacks under a systematic and reproducible protocol. This step provides common ground with shared assumptions, advantages, and limitations. We then run the attacks against the selected models individually and collect the performance metrics of interest: perturbation size, execution time, and query usage.
  - In stage (3), we gather all previously obtained results and compare attacks with the novel <code>local optimality</code> metric.
  - Finally, in stage (4), we aggregate the optimality results across all considered models, and in stage (5) we rank the attacks by their average optimality, namely <code>global optimality</code>; a minimal sketch of this computation follows the figure below.
  

<p align="center"><img src="https://attackbench.github.io/assets/AtkBench.svg" width="1300"></p>
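
To make stages (3)-(5) concrete, the following sketch shows one plausible way to score attacks from their security evaluation curves (robust accuracy as a function of perturbation budget). It uses plain NumPy on hypothetical curve data and an illustrative normalization; the exact definition of <code>local optimality</code> is in the paper, and this is not the library's internal implementation.

```python
import numpy as np

def auc(y, x):
    # Trapezoidal area under a security evaluation curve.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Hypothetical curves: robust accuracy at each perturbation budget for three
# attacks on a single model. A lower curve indicates a stronger attack.
budgets = np.linspace(0.0, 8 / 255, 50)
curves = {
    'apgd': np.clip(0.9 - 60 * budgets, 0.0, 1.0),
    'pgd':  np.clip(0.9 - 45 * budgets, 0.0, 1.0),
    'fmn':  np.clip(0.9 - 55 * budgets, 0.0, 1.0),
}

# Empirical best attack: the lower envelope over all curves at each budget.
envelope = np.min(np.stack(list(curves.values())), axis=0)
best_area = auc(envelope, budgets)

# Score each attack by how close its curve is to the envelope (1.0 = optimal).
# This normalization is illustrative, not the paper's exact formula.
for name, curve in curves.items():
    print(f"{name}: optimality ~ {best_area / auc(curve, budgets):.3f}")
```

Averaging these per-model scores across all benchmark models would then yield the <code>global optimality</code> used for the final ranking.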


## Currently implemented

| Attack       | Original | Advertorch | Adv_lib | ART | CleverHans | DeepRobust | Foolbox | Torchattacks |
|--------------|:--------:|:----------:|:-------:|:---:|:----------:|:----------:|:-------:|:------------:|
| DDN          |    ☒     |            |    ✓    |  ☒  |     ☒      |     ☒      |    ✓    |      ☒       |
| ALMA         |    ☒     |     ☒      |    ✓    |  ☒  |     ☒      |     ☒      |    ☒    |      ☒       |
| FMN          |    ✓     |     ☒      |    ✓    |  ☒  |     ☒      |     ☒      |    ✓    |      ☒       |
| PGD          |    ☒     |            |    ✓    |  ✓  |            |     ✓      |         |      ✓       |
| JSMA         |    ☒     |            |    ☒    |  ✓  |     ☒      |     ☒      |    ☒    |      ☒       |
| CW-L2        |    ☒     |            |    ✓    |  ✓  |            |     ~      |    ✓    |      ✓       |
| CW-LINF      |    ☒     |     ☒      |    ✓    |  ✓  |     ☒      |     ☒      |    ☒    |      ☒       |
| FGSM         |    ☒     |            |    ☒    |  ✓  |            |            |         |      ✓       |
| BB           |    ☒     |     ☒      |    ☒    |  ✓  |     ☒      |     ☒      |    ✓    |      ☒       |
| DF           |    ✓     |     ☒      |    ☒    |  ✓  |     ☒      |     ~      |    ✓    |      ✓       |
| SuperDF      |    ✓     |     ☒      |    ☒    |  ☒  |     ☒      |     ☒      |    ☒    |      ☒       |
| APGD         |    ✓     |     ☒      |    ✓    |  ✓  |     ☒      |     ☒      |    ☒    |      ✓       |
| BIM          |    ☒     |            |    ☒    |  ✓  |            |     ☒      |         |      ☒       |
| EAD          |    ☒     |            |    ☒    |  ✓  |     ☒      |     ☒      |    ✓    |      ☒       |
| PDGD         |    ☒     |     ☒      |    ✓    |  ☒  |     ☒      |     ☒      |    ☒    |      ☒       |
| PDPGD        |    ☒     |     ☒      |    ✓    |  ☒  |     ☒      |     ☒      |    ☒    |      ☒       |
| TR           |    ✓     |     ☒      |    ✓    |  ☒  |     ☒      |     ☒      |    ☒    |      ☒       |
| FAB          |    ✓     |            |    ✓    |  ☒  |     ☒      |     ☒      |    ☒    |      ✓       |


Legend: 
- _empty_ : not implemented yet 
- ☒ : not available
- ✓ : implemented
- ~ : not functional yet



## Requirements and Installation

- Python >= 3.9, < 3.13
- PyTorch >= 2.4
- TorchVision >= 0.19
- CUDA compatible GPU (recommended)

### Install from PyPI

```bash
pip install attackbenchlib
```

### Optional dependencies

```bash
# Attack library wrappers (ART, Foolbox, Torchattacks, CleverHans)
pip install "attackbenchlib[attacks]"

# Model loading utilities (RobustBench, timm, transformers)
pip install "attackbenchlib[models]"

# Analysis and visualization tools (scikit-learn, seaborn, plotly)
pip install "attackbenchlib[metrics]"

# Everything (attacks + models + metrics)
pip install "attackbenchlib[all]"
```

> **Note on `autoattack`:** RobustBench depends on `autoattack`. If you encounter import errors
> related to autoattack after installing `attackbenchlib[models]`, install it manually from GitHub:
> ```bash
> pip install git+https://github.com/fra31/auto-attack
> ```

> **Note on `adv-lib`:** The Adversarial Library (`adv-lib`) is not available on PyPI.
> If you need adv-lib attacks, install it manually:
> ```bash
> pip install git+https://github.com/jeromerony/adversarial-library
> ```

> **Note on `deeprobust`:** DeepRobust requires `scipy<1.8.0` and only works on Python 3.9:
> ```bash
> pip install "attackbenchlib[deeprobust]"
> ```

### Google Colab

On Google Colab, install the models and attacks extras:

```python
!pip install "attackbenchlib[models,attacks]" -q
!pip install git+https://github.com/fra31/auto-attack -q  # required for RobustBench
```

> You may see red dependency conflict warnings during installation. These are caused by
> RobustBench's strict dependency pins (e.g., `timm==1.0.9`) conflicting with Colab's
> pre-installed packages. The warnings are harmless; the library works correctly.

### Install from source (development)

```bash
git clone https://github.com/attackbench/AttackBenchLib.git
cd AttackBenchLib
pip install -e ".[dev]"
```


## Usage

```python
import torch
import attackbench
from attackbench.attacks import apgd

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load model and dataset (requires attackbenchlib[models])
model = attackbench.load_model('Standard', dataset='cifar10', threat_model='Linf')
model.to(device)

dataset = attackbench.get_loader(dataset='cifar10', batch_size=128, num_samples=1000)

# Run attack
results = attackbench.run_attack(
    model=model,
    dataset=dataset,
    attack=apgd,
    threat_model='linf',
    device=device
)

# Analyze results (requires attackbenchlib[metrics])
stats = attackbench.get_stats(results, 'linf')
print(f"ASR: {stats['ASR']*100:.1f}%")
```

Preconfigured attacks available out of the box: `pgd`, `fgsm`, `apgd`, `fab`, `fmn`, `deepfool`, `superdeepfool`, `trust_region`.
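
For example, assuming the `model`, `dataset`, and `device` from the snippet above, and that each preconfigured attack is importable from `attackbench.attacks` in the same way as `apgd`, several attacks can be benchmarked in one loop:

```python
from attackbench.attacks import apgd, fmn, pgd

# Run several preconfigured attacks on the same model and data, then compare ASR.
for name, attack in [('pgd', pgd), ('apgd', apgd), ('fmn', fmn)]:
    results = attackbench.run_attack(model=model, dataset=dataset, attack=attack,
                                     threat_model='linf', device=device)
    stats = attackbench.get_stats(results, 'linf')
    print(f"{name}: ASR {stats['ASR'] * 100:.1f}%")
```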

To use attacks from external libraries (requires `attackbenchlib[attacks]`):

```python
# List available attacks
attacks = attackbench.list_attacks(threat_model='linf')

# Load a specific library attack
art_pgd = attackbench.get_attack(lib='art', attack='pgd', threat_model='linf')
results = attackbench.run_attack(model=model, dataset=dataset, attack=art_pgd, threat_model='linf', device=device)
```



## Attack format

Wrappers for all attack implementations (including those from external libraries) must follow this format:

- inputs:
    - `model`: `nn.Module` taking inputs in the [0, 1] range and returning logits in $\mathbb{R}^K$
    - `inputs`: `FloatTensor` representing the input samples in the [0, 1] range
    - `labels`: `LongTensor` representing the labels of the samples
    - `targets`: `LongTensor` or `None` representing the target associated with each sample
    - `targeted`: `bool` flag indicating if a targeted attack should be performed
- output:
    - `adv_inputs`: `FloatTensor` representing the perturbed inputs in the [0, 1] range
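
As an illustration, here is a minimal FGSM-style wrapper conforming to this interface. It is a sketch, not one of the library's shipped wrappers: the `eps` hyperparameter is illustrative, and real wrappers typically expose more attack-specific options.

```python
from typing import Optional

import torch
from torch import nn

def fgsm_wrapper(model: nn.Module,
                 inputs: torch.Tensor,
                 labels: torch.Tensor,
                 targets: Optional[torch.Tensor] = None,
                 targeted: bool = False,
                 eps: float = 8 / 255) -> torch.Tensor:
    """Minimal FGSM-style attack following the wrapper format above."""
    # Assumes `targets` is provided when `targeted` is True.
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(inputs),
                                       targets if targeted else labels)
    grad, = torch.autograd.grad(loss, inputs)
    # Ascend the loss for untargeted attacks, descend it for targeted ones.
    step = -grad.sign() if targeted else grad.sign()
    # Keep the perturbed inputs in the required [0, 1] range.
    return (inputs + eps * step).clamp(0, 1).detach()
```

Extra hyperparameters such as `eps` can be bound in advance, e.g., with `functools.partial(fgsm_wrapper, eps=4 / 255)`, so the resulting callable matches the expected signature.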


## Citation

If you use the **AttackBench** leaderboards or implementation, please consider citing our [paper](https://arxiv.org/pdf/2404.19460):

```bibtex
@inproceedings{cina2025attackbench,
  title={{AttackBench}: Evaluating Gradient-based Attacks for Adversarial Examples},
  author={Cin{\`a}, Antonio Emanuele and Rony, J{\'e}r{\^o}me and Pintor, Maura and Demetrio, Luca and Demontis, Ambra and Biggio, Battista and Ayed, Ismail Ben and Roli, Fabio},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={3},
  pages={2600--2608},
  year={2025},
  DOI={10.1609/aaai.v39i3.32263}
}
```

## Contact 
Feel free to contact us about anything related to **`AttackBench`** by opening an issue or a pull request, or by email at `antonio.cina@unige.it`.

## Acknowledgements
AttackBench has been partially developed with the support of European Union’s [ELSA – European Lighthouse on Secure and Safe AI](https://elsa-ai.eu), Horizon Europe, grant agreement No. 101070617, and [Sec4AI4Sec - Cybersecurity for AI-Augmented Systems](https://www.sec4ai4sec-project.eu), Horizon Europe, grant agreement No. 101120393.

<img src="_static/assets/logos/sec4AI4sec.png" alt="sec4ai4sec" style="width:70px;"/> &nbsp;&nbsp; 
<img src="_static/assets/logos/elsa.jpg" alt="elsa" style="width:70px;"/> &nbsp;&nbsp; 
<img src="_static/assets/logos/FundedbytheEU.png" alt="europe" style="width:240px;"/>
