Robustness via curvature regularization, and vice versa

SM Moosavi-Dezfooli, A Fawzi… - Proceedings of the …, 2019 - openaccess.thecvf.com
State-of-the-art classifiers have been shown to be largely vulnerable to adversarial
perturbations. One of the most effective strategies to improve robustness is adversarial …

MMA training: Direct input space margin maximization through adversarial training

GW Ding, Y Sharma, KYC Lui, R Huang - arXiv preprint arXiv:1812.02637, 2018 - arxiv.org
We study adversarial robustness of neural networks from a margin maximization
perspective, where margins are defined as the distances from inputs to a classifier's decision …

Reducing excessive margin to achieve a better accuracy vs. robustness trade-off

R Rade, SM Moosavi-Dezfooli - International Conference on …, 2021 - openreview.net
While adversarial training has become the de facto approach for training robust classifiers, it
leads to a drop in accuracy. This has led to prior works postulating that accuracy is …

Relating adversarially robust generalization to flat minima

D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Adversarial training (AT) has become the de-facto standard to obtain models robust against
adversarial examples. However, AT exhibits severe robust overfitting: cross-entropy loss on …

Feature purification: How adversarial training performs robust deep learning

Z Allen-Zhu, Y Li - 2021 IEEE 62nd Annual Symposium on …, 2022 - ieeexplore.ieee.org
Despite the empirical success of using adversarial training to defend deep learning models
against adversarial perturbations, so far, it still remains rather unclear what the principles are …

Adversarial vulnerability for any classifier

A Fawzi, H Fawzi, O Fawzi - Advances in neural information …, 2018 - proceedings.neurips.cc
Despite achieving impressive performance, state-of-the-art classifiers remain highly
vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven …

Disentangling adversarial robustness and generalization

D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Obtaining deep networks that are robust against adversarial examples and generalize well
is an open problem. A recent hypothesis even states that both robust and accurate models …

Defending against universal perturbations with shared adversarial training

CK Mummadi, T Brox… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Classifiers such as deep neural networks have been shown to be vulnerable against
adversarial perturbations on problems with high-dimensional input space. While adversarial …

Fundamental limits on adversarial robustness

A Fawzi, O Fawzi, P Frossard - Proc. ICML, Workshop on Deep Learning, 2015 - epfl.ch
The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep
networks, that is their instability to adversarial perturbations (Szegedy et al., 2014). We …

Robustness may be at odds with accuracy

D Tsipras, S Santurkar, L Engstrom, A Turner… - arXiv preprint arXiv …, 2018 - arxiv.org
We show that there may exist an inherent tension between the goal of adversarial
robustness and that of standard generalization. Specifically, training robust models may not …