Evading defenses to transferable adversarial examples by translation-invariant attacks

Y Dong, T Pang, H Su, J Zhu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers
by adding imperceptible perturbations. An intriguing property of adversarial examples is …
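The snippet above describes misleading a classifier by adding an imperceptible perturbation. As a minimal illustration of the general idea (a single gradient-sign step on a toy linear classifier, not the specific method of the cited paper; all weights and names below are hypothetical):

```python
import numpy as np

# Toy linear classifier: class scores are W @ x
# (illustrative weights, not from any cited paper).
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # row i = weight vector for class i
x = np.array([1.0, 0.8])    # clean input, predicted class 0

def predict(W, x):
    return int(np.argmax(W @ x))

# One sign step in the direction that raises the wrong class's score:
# for a linear model, the gradient of the score margin
# (wrong class minus true class) w.r.t. x is simply W[1] - W[0].
eps = 0.15
grad = W[1] - W[0]
x_adv = x + eps * np.sign(grad)

print(predict(W, x), predict(W, x_adv))  # prints "0 1": the small step flips the prediction
```

The perturbation has magnitude at most `eps` per coordinate, which is what makes such examples visually imperceptible for small `eps` on images.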

Transferable adversarial perturbations

W Zhou, X Hou, Y Chen, M Tang… - Proceedings of the …, 2018 - openaccess.thecvf.com
State-of-the-art deep neural network classifiers are highly vulnerable to adversarial
examples which are designed to mislead classifiers with a very small perturbation. However …

Rethinking model ensemble in transfer-based adversarial attacks

H Chen, Y Zhang, Y Dong, X Yang, H Su… - arXiv preprint arXiv …, 2023 - arxiv.org
It is widely recognized that deep learning models lack robustness to adversarial examples.
An intriguing property of adversarial examples is that they can transfer across different …
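The ensemble idea this line of work studies, aggregating gradients from several white-box surrogate models so the resulting perturbation transfers to an unseen target, can be sketched with toy linear classifiers (the models, weights, and numbers below are hypothetical, not taken from the paper):

```python
import numpy as np

# Two white-box surrogate models and one held-out target model,
# all linear classifiers with scores W @ x (hypothetical toy weights).
surrogates = [
    np.array([[1.0, 0.1],
              [0.1, 1.0]]),
    np.array([[0.9, 0.0],
              [0.2, 1.1]]),
]
target = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
x = np.array([1.0, 0.8])  # clean input, true class 0

def predict(W, x):
    return int(np.argmax(W @ x))

# Ensemble attack sketch: average the margin gradients
# (wrong class minus true class) over the surrogates, then
# take one sign step of size eps.
eps = 0.15
avg_grad = np.mean([W[1] - W[0] for W in surrogates], axis=0)
x_adv = x + eps * np.sign(avg_grad)

print(predict(target, x), predict(target, x_adv))  # prints "0 1": the attack transfers
```

The target model was never queried while crafting `x_adv`; the averaged surrogate gradient alone was enough to flip its prediction, which is the black-box transfer setting these papers study.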

Curls & whey: Boosting black-box adversarial attacks

Y Shi, S Wang, Y Han - … of the IEEE/CVF Conference on …, 2019 - openaccess.thecvf.com
Image classifiers based on deep neural networks are plagued by adversarial
examples. Two defects exist in black-box iterative attacks that generate …

Admix: Enhancing the transferability of adversarial attacks

X Wang, X He, J Wang, K He - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Deep neural networks are known to be extremely vulnerable to adversarial examples under
white-box setting. Moreover, the malicious adversaries crafted on the surrogate (source) …

Boosting adversarial transferability via gradient relevance attack

H Zhu, Y Ren, X Sui, L Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Plentiful adversarial attack research has revealed the fragility of deep neural networks
(DNNs), where imperceptible perturbations can cause drastic changes in the output …

Boosting the transferability of adversarial attacks with reverse adversarial perturbation

Z Qin, Y Fan, Y Liu, L Shen, Y Zhang… - Advances in neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples,
which can produce erroneous predictions by injecting imperceptible perturbations. In this …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …

A self-supervised approach for adversarial robustness

M Naseer, S Khan, M Hayat… - Proceedings of the …, 2020 - openaccess.thecvf.com
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)
based vision systems, e.g., for classification, segmentation and object detection. The …

An adaptive model ensemble adversarial attack for boosting adversarial transferability

B Chen, J Yin, S Chen, B Chen… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
While the transferability property of adversarial examples allows the adversary to perform
black-box attacks (i.e., the attacker has no knowledge about the target model), the transfer …