Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …

Boosting adversarial transferability through enhanced momentum

X Wang, J Lin, H Hu, J Wang, K He - arXiv preprint arXiv:2103.10609, 2021 - arxiv.org
Deep learning models are known to be vulnerable to adversarial examples crafted by
adding human-imperceptible perturbations to benign images. Many existing adversarial …

Boosting adversarial transferability via gradient relevance attack

H Zhu, Y Ren, X Sui, L Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Extensive adversarial attack research has revealed the fragility of deep neural networks
(DNNs), where imperceptible perturbations can cause drastic changes in the output …

Feature importance-aware transferable adversarial attacks

Z Wang, H Guo, Z Zhang, W Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Transferability of adversarial examples is of central importance for attacking an unknown
model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks …

Evading defenses to transferable adversarial examples by translation-invariant attacks

Y Dong, T Pang, H Su, J Zhu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers
by adding imperceptible perturbations. An intriguing property of adversarial examples is …

Transferable adversarial attack based on integrated gradients

Y Huang, AWK Kong - arXiv preprint arXiv:2205.13152, 2022 - arxiv.org
The vulnerability of deep neural networks to adversarial examples has drawn tremendous
attention from the community. Three approaches, optimizing standard objective functions …

Curls & whey: Boosting black-box adversarial attacks

Y Shi, S Wang, Y Han - … of the IEEE/CVF Conference on …, 2019 - openaccess.thecvf.com
Image classifiers based on deep neural networks are vulnerable to adversarial
examples. Two defects exist in black-box iterative attacks that generate …

Nesterov accelerated gradient and scale invariance for adversarial attacks

J Lin, C Song, K He, L Wang, JE Hopcroft - arXiv preprint arXiv …, 2019 - arxiv.org
Deep learning models are vulnerable to adversarial examples crafted by applying human-
imperceptible perturbations to benign inputs. However, under the black-box setting, most …

Admix: Enhancing the transferability of adversarial attacks

X Wang, X He, J Wang, K He - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Deep neural networks are known to be extremely vulnerable to adversarial examples under
the white-box setting. Moreover, the malicious adversarial examples crafted on the surrogate (source) …

Understanding and enhancing the transferability of adversarial examples

L Wu, Z Zhu, C Tai - arXiv preprint arXiv:1802.09707, 2018 - arxiv.org
State-of-the-art deep neural networks are known to be vulnerable to adversarial examples,
formed by applying small but malicious perturbations to the original inputs. Moreover, the …