Understanding robust overfitting of adversarial training and beyond

C Yu, B Han, L Shen, J Yu, C Gong… - International …, 2022 - proceedings.mlr.press
Robust overfitting widely exists in adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …

Revisiting adversarial robustness distillation from the perspective of robust fairness

X Yue, N Mou, Q Wang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Adversarial Robustness Distillation (ARD) aims to transfer the robustness of large
teacher models to small student models, facilitating the attainment of robust performance on …

Scaling adversarial training to large perturbation bounds

S Addepalli, S Jain, G Sriramanan… - … on Computer Vision, 2022 - Springer
The vulnerability of Deep Neural Networks to Adversarial Attacks has fuelled
research towards building robust models. While most Adversarial Training algorithms aim at …

Towards achieving adversarial robustness beyond perceptual limits

S Addepalli, S Jain, G Sriramanan, VB Radhakrishnan - 2021 - openreview.net
The vulnerability of Deep Neural Networks to Adversarial Attacks has fuelled research
towards building robust models. While most Adversarial Training algorithms aim towards …

Revisiting Adversarial Training under Long-Tailed Distributions

X Yue, N Mou, Q Wang, L Zhao - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial attacks that lead to erroneous outputs.
Adversarial training has been recognized as one of the most effective methods to counter …

Memorization weights for instance reweighting in adversarial training

J Zhang, Y Hong, Q Zhao - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
Adversarial training is an effective way to defend deep neural networks (DNNs) against
adversarial examples. However, there are atypical samples that are rare and hard to learn …

Towards transferable unrestricted adversarial examples with minimum changes

F Liu, C Zhang, H Zhang - 2023 IEEE Conference on Secure …, 2023 - ieeexplore.ieee.org
Transfer-based adversarial examples constitute one of the most important classes of black-box
attacks. However, there is a trade-off between transferability and imperceptibility of the …

One-vs-the-rest loss to focus on important samples in adversarial training

S Kanai, S Yamaguchi, M Yamada… - International …, 2023 - proceedings.mlr.press
This paper proposes a new loss function for adversarial training. Since adversarial training
has difficulties, e.g., the necessity of high model capacity, focusing on important data points by …

On the impact of hard adversarial instances on overfitting in adversarial training

C Liu, Z Huang, M Salzmann, T Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
Adversarial training is a popular method to robustify models against adversarial attacks.
However, it exhibits much more severe overfitting than training on clean inputs. In this work …

Alleviating robust overfitting of adversarial training with consistency regularization

S Zhang, H Gao, T Zhang, Y Zhou, Z Wu - arXiv preprint arXiv:2205.11744, 2022 - arxiv.org
Adversarial training (AT) has proven to be one of the most effective ways to defend Deep
Neural Networks (DNNs) against adversarial attacks. However, the phenomenon of robust …
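
Nearly every entry above studies projected gradient descent (PGD) based adversarial training, the min-max scheme in which each minibatch is replaced by worst-case perturbations before the usual gradient step. As a reference point for these papers, a minimal PyTorch sketch of that baseline follows; the function names, the L-infinity budget eps = 8/255, the step size, and the step count are illustrative assumptions and are not taken from any of the listed works.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        """Craft L-infinity bounded adversarial examples with PGD (illustrative settings)."""
        # Start from a random point inside the epsilon ball around x.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
            (grad,) = torch.autograd.grad(loss, delta)
            # Ascent step on the loss, then project back onto the epsilon ball.
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        return torch.clamp(x + delta, 0.0, 1.0).detach()

    def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
        """One epoch of adversarial training: every update uses PGD examples, not clean inputs."""
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()

The robust overfitting discussed in several of the entries refers to the robust test accuracy of a model trained in this way degrading late in training even while robust training accuracy keeps improving.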