Stability analysis and generalization bounds of adversarial training

J Xiao, Y Fan, R Sun, J Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
In adversarial machine learning, deep neural networks can fit the adversarial examples on
the training dataset but have poor generalization ability on the test set. This phenomenon is …

Fantastic robustness measures: the secrets of robust generalization

H Kim, J Park, Y Choi, J Lee - Advances in Neural …, 2024 - proceedings.neurips.cc
Adversarial training has become the de facto standard method for improving the robustness
of models against adversarial examples. However, robust overfitting remains a significant …

Advancing example exploitation can alleviate critical challenges in adversarial training

Y Ge, Y Li, K Han, J Zhu, X Long - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks have achieved remarkable results across various tasks. However,
they are susceptible to adversarial examples, which are generated by adding adversarial …

Fast Adversarial Training with Smooth Convergence

M Zhao, L Zhang, Y Kong, B Yin - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Fast adversarial training (FAT) is beneficial for improving the adversarial robustness of
neural networks. However, previous FAT work has encountered a significant issue known as …

Eliminating catastrophic overfitting via abnormal adversarial examples regularization

R Lin, C Yu, T Liu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Single-step adversarial training (SSAT) has demonstrated the potential to achieve both
efficiency and robustness. However, SSAT suffers from catastrophic overfitting (CO), a …

On the Over-Memorization During Natural, Robust and Catastrophic Overfitting

R Lin, C Yu, B Han, T Liu - arXiv preprint arXiv:2310.08847, 2023 - arxiv.org
Overfitting negatively impacts the generalization ability of deep neural networks (DNNs) in
both natural and adversarial training. Existing methods struggle to consistently address …

Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM

L Tang, L Zhang - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Numerous studies have demonstrated the susceptibility of deep neural networks (DNNs) to
subtle adversarial perturbations, prompting the development of many advanced adversarial …

Rethinking the validity of perturbation in single-step adversarial training

Y Ge, Y Li, K Han - Pattern Recognition, 2024 - Elsevier
Neural network models have the drawback of making incorrect predictions under the
influence of slight adversarial perturbations. Single-step adversarial training (AT) is an …

Stability and Generalization in Free Adversarial Training

X Cheng, K Fu, F Farnia - arXiv preprint arXiv:2404.08980, 2024 - arxiv.org
While adversarial training methods have resulted in significant improvements in the deep
neural nets' robustness against norm-bounded adversarial perturbations, their …

Improving Fast Adversarial Training via Self-Knowledge Guidance

C Jiang, J Wang, M Dong, J Gui, X Shi, Y Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training has achieved remarkable advancements in defending against
adversarial attacks. Among them, fast adversarial training (FAT) is gaining attention for its …