Authors
Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
Publication date
2021/12/6
Journal
Advances in Neural Information Processing Systems
Volume
34
Pages
20052-20062
Description
Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned, initialized, and possibly executed for many computationally-demanding iterations, even if specialized to a given perturbation model. In this work, we overcome these limitations by proposing a fast minimum-norm (FMN) attack that works with different ℓp-norm perturbation models (p = 0, 1, 2, ∞), is robust to hyperparameter choices, does not require adversarial starting points, and converges within a few lightweight steps. It works by iteratively finding the sample misclassified with maximum confidence within an ℓp-norm constraint of size ε, while adapting ε to minimize the distance of the current sample to the decision boundary. Extensive experiments show that FMN significantly outperforms existing ℓ0-, ℓ1-, and ℓ∞-norm attacks in terms of perturbation size, convergence speed and computation time, while reporting comparable performances with state-of-the-art ℓ2-norm attacks. Our open-source code is available at: https://github.com/pralab/Fast-Minimum-Norm-FMN-Attack.
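To make the iterative scheme concrete, below is a minimal PyTorch sketch of an ℓ2-only loop in the spirit of the abstract: a normalized gradient step on the logit margin, projection onto an ε-ball, and an ε update that shrinks toward the smallest adversarial perturbation found so far (or extrapolates the distance to the decision boundary to first order when none has been found yet). The function name, hyperparameters, and cosine schedules here are illustrative assumptions, not the authors' implementation; the reference attack, which also handles the other ℓp norms, is in the repository linked above.

```python
import math

import torch


def fmn_l2_sketch(model, x, y, steps=100, alpha0=1.0, gamma0=0.05):
    """Illustrative l2 minimum-norm loop (assumed hyperparameters and
    schedules; not the authors' code). x: images in [0, 1], shape
    (B, C, H, W); y: true labels, shape (B,)."""
    delta = torch.zeros_like(x)
    best = torch.zeros_like(x)
    best_norm = torch.full((x.size(0),), float("inf"), device=x.device)

    for k in range(steps):
        # Cosine-annealed step size and eps-shrink factor (assumed schedule).
        cos = (1 + math.cos(math.pi * k / steps)) / 2
        alpha, gamma = alpha0 * cos, gamma0 * cos

        delta.requires_grad_(True)
        logits = model(x + delta)
        # Logit margin: > 0 means the sample is still correctly classified.
        z_true = logits.gather(1, y[:, None]).squeeze(1)
        z_other = logits.scatter(1, y[:, None], -float("inf")).amax(dim=1)
        margin = z_true - z_other
        (grad,) = torch.autograd.grad(margin.sum(), delta)

        with torch.no_grad():
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            d_norm = delta.flatten(1).norm(dim=1)

            # Keep the smallest adversarial perturbation found so far.
            improved = (margin < 0) & (d_norm < best_norm)
            best_norm = torch.where(improved, d_norm, best_norm)
            best[improved] = delta[improved]

            # eps update: once adversarial, shrink toward the boundary;
            # otherwise extrapolate the boundary distance to first order.
            found = best_norm.isfinite()
            eps = torch.where(found, best_norm * (1 - gamma),
                              d_norm + margin / g_norm)

            # Normalized gradient step on the margin, projection onto the
            # eps-ball, then back into the input box [0, 1].
            delta = delta - alpha * grad / g_norm.view(-1, 1, 1, 1)
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta = delta * (eps / d_norm).clamp_max(1.0).view(-1, 1, 1, 1)
            delta = (x + delta).clamp(0.0, 1.0) - x

    # Return the best adversarial examples found (inputs left unchanged
    # where no adversarial perturbation was found within the budget).
    return x + torch.where(found.view(-1, 1, 1, 1), best, delta)
```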
Scholar articles
M Pintor, F Roli, W Brendel, B Biggio - Advances in Neural Information Processing Systems, 2021