Understanding robust overfitting of adversarial training and beyond

C Yu, B Han, L Shen, J Yu, C Gong… - International …, 2022 - proceedings.mlr.press
Robust overfitting widely exists in adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …

Enhancing fine-tuning based backdoor defense with sharpness-aware minimization

M Zhu, S Wei, L Shen, Y Fan… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defense, which aims to detect or mitigate the effect of malicious triggers introduced
by attackers, is becoming increasingly critical for machine learning security and integrity …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, describing that ML systems may produce …

On the duality between sharpness-aware minimization and adversarial training

Y Zhang, H He, J Zhu, H Chen, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial Training (AT), which adversarially perturbs the input samples during training, has
been acknowledged as one of the most effective defenses against adversarial attacks, yet …

Sharpness-aware minimization alone can improve adversarial robustness

Z Wei, J Zhu, Y Zhang - arXiv preprint arXiv:2305.05392, 2023 - arxiv.org
Sharpness-Aware Minimization (SAM) is an effective method for improving generalization
ability by regularizing loss sharpness. In this paper, we explore SAM in the context of …

Fusion of global and local knowledge for personalized federated learning

T Huang, L Shen, Y Sun, W Lin, D Tao - arXiv preprint arXiv:2302.11051, 2023 - arxiv.org
Personalized federated learning, as a variant of federated learning, trains customized
models for clients using their heterogeneously distributed data. However, it is still …

Eliminating catastrophic overfitting via abnormal adversarial examples regularization

R Lin, C Yu, T Liu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Single-step adversarial training (SSAT) has demonstrated the potential to achieve both
efficiency and robustness. However, SSAT suffers from catastrophic overfitting (CO), a …

On the Over-Memorization During Natural, Robust and Catastrophic Overfitting

R Lin, C Yu, B Han, T Liu - arXiv preprint arXiv:2310.08847, 2023 - arxiv.org
Overfitting negatively impacts the generalization ability of deep neural networks (DNNs) in
both natural and adversarial training. Existing methods struggle to consistently address …

Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement

Z Li, D Yu, L Wei, C Jin, Y Zhang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Adversarial training (AT) is currently one of the most effective ways to obtain the robustness
of deep neural networks against adversarial attacks. However, most AT methods suffer from …

Aroid: Improving adversarial robustness through online instance-wise data augmentation

L Li, J Qiu, M Spratling - International Journal of Computer Vision, 2024 - Springer
Deep neural networks are vulnerable to adversarial examples. Adversarial training (AT) is
an effective defense against adversarial examples. However, AT is prone to overfitting, which …