Deep neural networks are exposed to the risk of adversarial attacks crafted with the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms …
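As a concrete illustration of the single-step attack these entries refer to, here is a minimal FGSM sketch in PyTorch. It is not drawn from any of the cited papers; `model`, `x`, `y`, and `eps` are assumed placeholders, and inputs are assumed to live in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: move each input by eps along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    # Clamp back to the valid input range (assumed here to be [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```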
Vision Transformer (ViT), a powerful alternative to the Convolutional Neural Network (CNN), has received much attention. Recent work showed that ViTs are also vulnerable to …
T Li, Y Wu, S Chen, K Fang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Single-step adversarial training (AT) has received wide attention as it has proved to be both efficient and robust. However, a serious problem of catastrophic overfitting exists, i.e., the …
X Jia, Y Zhang, X Wei, B Wu, K Ma, J Wang… - European Conference on …, 2022 - Springer
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust …
Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model …
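The single-step training procedure that these entries analyze for catastrophic overfitting can be sketched as follows. This is a compressed sketch of an FGSM-based fast adversarial training step with the random initialization popularized by Wong et al. (2020), not the exact recipe of any cited paper; `model`, `optimizer`, `eps`, and `alpha` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_at_step(model, optimizer, x, y, eps, alpha):
    """One fast-AT update: random init inside the eps-ball, one FGSM step, then train."""
    # Random start, the ingredient Wong et al. (2020) found delays catastrophic overfitting.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    # Single FGSM step on the perturbation, projected back into the eps-ball.
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()
    x_adv = (x + delta).clamp(0.0, 1.0)
    # Standard training update on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```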
X Jia, Y Zhang, B Wu, J Wang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training. However, most AT methods are …
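For contrast with the single-step variants above, here is a minimal multi-step PGD adversarial-training loop in the style of Madry et al., whose per-batch cost is what motivates the fast AT methods in these entries. All names and hyperparameters are illustrative assumptions, not taken from the cited work.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """Multi-step PGD: iterate FGSM-style steps, projecting back into the eps-ball."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        # Project onto the eps-ball around x, then clamp to the valid input range.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def pgd_at_step(model, optimizer, x, y, eps, alpha, steps):
    """One adversarial-training update on PGD examples (the expensive inner loop)."""
    x_adv = pgd_attack(model, x, y, eps, alpha, steps)
    optimizer.zero_grad()  # clear gradients accumulated during the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```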
J Wang, C Wang, Q Lin, C Luo, C Wu, J Li - Neurocomputing, 2022 - Elsevier
In recent years, research on adversarial attacks and defense mechanisms has received much attention. It has been observed that adversarial examples crafted with small malicious …
Adversarial training is widely used to improve the robustness of deep neural networks to adversarial attacks. However, adversarial training is prone to overfitting, and the cause is far …
Although current deep learning techniques have yielded superior performance on various computer vision tasks, they are still vulnerable to adversarial examples. Adversarial …