SegPGD: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness

J Gu, H Zhao, V Tresp, PHS Torr - European Conference on Computer …, 2022 - Springer
Deep neural network-based image classifiers are vulnerable to adversarial perturbations. These classifiers can be easily fooled by adding artificial, small and …
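For context, the attacks these segmentation-robustness results build on are iterative projected gradient methods. Below is a minimal sketch of a generic L-infinity PGD attack in PyTorch; it is not the SegPGD loss itself (whose per-pixel weighting is described in the paper), and `model`, `loss_fn`, `eps`, `alpha`, and `steps` are illustrative placeholders.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """Iteratively perturb x within an eps-ball to maximize the loss (generic PGD sketch)."""
    x_adv = x.clone().detach()
    # Random start inside the eps-ball.
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss with a signed-gradient step, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```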

Towards efficient adversarial training on vision transformers

B Wu, J Gu, Z Li, D Cai, X He, W Liu - European Conference on Computer …, 2022 - Springer
Vision Transformer (ViT), a powerful alternative to Convolutional Neural Networks (CNNs), has received much attention. Recent work showed that ViTs are also vulnerable to …

Prior-guided adversarial initialization for fast adversarial training

X Jia, Y Zhang, X Wei, B Wu, K Ma, J Wang… - European Conference on …, 2022 - Springer
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust …

Efficient and effective augmentation strategy for adversarial training

S Addepalli, S Jain - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Adversarial training of Deep Neural Networks is known to be significantly more data-hungry than standard training. Furthermore, complex data augmentations …

Make some noise: Reliable and efficient single-step adversarial training

P de Jorge Aranda, A Bibi, R Volpi… - Advances in …, 2022 - proceedings.neurips.cc
Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model …
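To make the setting concrete, below is a minimal sketch of single-step FGSM adversarial training with a random start inside the eps-ball, in the spirit of Wong et al. (2020); it is not the noise-based recipe proposed in this paper, and `model`, `loader`, `opt`, `eps`, and `alpha` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_train_epoch(model, loader, opt, eps=8/255, alpha=10/255, device="cpu"):
    """One epoch of single-step FGSM adversarial training with random initialization (sketch)."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Random initialization inside the eps-ball before the single FGSM step.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # One signed-gradient step on the perturbation, then project back into the eps-ball.
        delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps).detach()
        # Update the model on the resulting adversarial example.
        opt.zero_grad()
        F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y).backward()
        opt.step()
```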

Comparative analysis of binary and one-class classification techniques for credit card fraud data

JL Leevy, J Hancock, TM Khoshgoftaar - Journal of Big Data, 2023 - Springer
The yearly increase in incidents of credit card fraud can be attributed to the rapid growth of e-
commerce. To address this issue, effective fraud detection methods are essential. Our …

Improving fast adversarial training with prior-guided knowledge

X Jia, Y Zhang, X Wei, B Wu, K Ma… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Fast adversarial training (FAT) is an efficient method to improve robustness in white-box
attack scenarios. However, the original FAT suffers from catastrophic overfitting, which …

Shift from texture-bias to shape-bias: Edge deformation-based augmentation for robust object recognition

X He, Q Lin, C Luo, W Xie, S Song… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent studies have shown the vulnerability of CNNs to perturbation noise, which arises in part because well-trained CNNs are too biased toward the object …

Learning defense transformations for counterattacking adversarial examples

J Li, S Zhang, J Cao, M Tan - Neural Networks, 2023 - Elsevier
Deep neural networks (DNNs) are vulnerable to adversarial examples with small perturbations. Adversarial defense has thus become an important means of improving the …

Towards intrinsic adversarial robustness through probabilistic training

J Dong, L Yang, Y Wang, X Xie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Modern deep neural networks have made numerous breakthroughs in real-world
applications, yet they remain vulnerable to some imperceptible adversarial perturbations …