Improving the transferability of adversarial samples with adversarial transformations

W Wu, Y Su, MR Lyu, I King - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Although deep neural networks (DNNs) have achieved tremendous performance in diverse
vision challenges, they are surprisingly susceptible to adversarial examples, which are born …

Query-efficient black-box attack by active learning

L Pengcheng, J Yi, L Zhang - 2018 IEEE International …, 2018 - ieeexplore.ieee.org
Deep neural networks (DNNs), as popular machine learning models, have been found to be
vulnerable to adversarial attacks. Such an attack constructs adversarial examples by adding small …

Exploring effective data for surrogate training towards black-box attack

X Sun, G Cheng, H Li, L Pei… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Without access to the training data of a deployed black-box victim model, training a
surrogate model for black-box adversarial attacks remains a struggle. In terms of data, we …

An efficient adversarial example generation algorithm based on an accelerated gradient iterative fast gradient

J Liu, Q Zhang, K Mo, X Xiang, J Li, D Cheng… - Computer Standards & …, 2022 - Elsevier
Most existing deep neural networks are susceptible to adversarial examples,
which may cause them to output incorrect predictions. An adversarial example is the …
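The entry above describes a gradient-iteration attack in the FGSM family. As a point of reference, a minimal numpy sketch of iterative FGSM with a momentum term (MI-FGSM-style; the paper's own accelerated-gradient variant differs in its update rule, and the toy linear model and hyperparameters here are illustrative assumptions):

```python
import numpy as np

def momentum_ifgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Iterative FGSM with momentum accumulation (a sketch, not the
    paper's exact accelerated-gradient method)."""
    alpha = eps / steps          # per-step size so the total budget is eps
    g = np.zeros_like(x)         # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # normalise by L1 norm before accumulating, as in MI-FGSM
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the eps-ball around the clean input x
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# toy example: attack a linear "model" whose loss gradient is constant
w = np.array([1.0, -2.0, 0.5])
x_adv = momentum_ifgsm(np.zeros(3), lambda x: w, eps=0.1, steps=10)
# x_adv ends at eps * sign(w), the corner of the eps-ball
```

In a real attack, `grad_fn` would return the loss gradient from a surrogate network; the momentum term is what stabilises the update direction across iterations and improves transferability.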

Generating adversarial examples with adversarial networks

C Xiao, B Li, JY Zhu, W He, M Liu, D Song - arXiv preprint arXiv …, 2018 - arxiv.org
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples
resulting from adding small-magnitude perturbations to inputs. Such adversarial examples …

Enhancing adversarial example transferability with an intermediate level attack

Q Huang, I Katsman, H He, Z Gu… - Proceedings of the …, 2019 - openaccess.thecvf.com
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool
trained models. Adversarial examples often exhibit black-box transfer, meaning that …

Improving transferability of adversarial examples with input diversity

C Xie, Z Zhang, Y Zhou, S Bai, J Wang… - Proceedings of the …, 2019 - openaccess.thecvf.com
Though CNNs have achieved state-of-the-art performance on various vision tasks, they
are vulnerable to adversarial examples---crafted by adding human-imperceptible …
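The input-diversity method above applies a random resize-and-pad transform to the input at each attack iteration. A dependency-free numpy sketch of such a transform (the size range, probability, and nearest-neighbour resizing are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def input_diversity(img, low=24, high=32, p=0.5, rng=None):
    """Randomly resize a (grayscale H x W) image down and pad it back
    to its original size, applied with probability p -- in the spirit
    of DI-FGSM's input transformation."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() >= p:
        return img                       # keep the input unchanged
    h, w = img.shape[:2]
    new = int(rng.integers(low, high))   # random smaller side length
    # nearest-neighbour downsizing via index sampling (no external deps)
    rows = np.arange(new) * h // new
    cols = np.arange(new) * w // new
    small = img[np.ix_(rows, cols)]
    # pad back to the original size at a random offset
    top = int(rng.integers(0, h - new + 1))
    left = int(rng.integers(0, w - new + 1))
    out = np.zeros_like(img)
    out[top:top + new, left:left + new] = small
    return out
```

During the attack, the surrogate's gradient is computed on `input_diversity(x_adv)` rather than `x_adv` itself; the randomised geometry discourages overfitting the perturbation to one model.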

Exploring the space of black-box attacks on deep neural networks

AN Bhagoji, W He, B Li, D Song - arXiv preprint arXiv:1712.09491, 2017 - arxiv.org
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on
transferability, where an adversarial instance generated for a locally trained model can …

Transferable adversarial perturbations

W Zhou, X Hou, Y Chen, M Tang… - Proceedings of the …, 2018 - openaccess.thecvf.com
State-of-the-art deep neural network classifiers are highly vulnerable to adversarial
examples which are designed to mislead classifiers with a very small perturbation. However …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …
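Variance tuning, as described above, augments the momentum update with a variance term estimated from gradients at random neighbours of the current iterate. A numpy sketch in the style of VMI-FGSM (the toy linear model, sampling count, and neighbourhood radius factor are illustrative assumptions):

```python
import numpy as np

def vt_mifgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0, beta=1.5, n=5, rng=None):
    """MI-FGSM with a variance-tuning term (VMI-FGSM-style sketch;
    defaults are common choices, not necessarily the paper's)."""
    if rng is None:
        rng = np.random.default_rng(0)
    alpha = eps / steps
    g = np.zeros_like(x)   # momentum
    v = np.zeros_like(x)   # gradient-variance term from the last step
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # momentum update uses the current gradient plus the variance term
        g = mu * g + (grad + v) / (np.abs(grad + v).sum() + 1e-12)
        # re-estimate v from gradients at random neighbours of x_adv
        neigh = [grad_fn(x_adv + rng.uniform(-beta * eps, beta * eps, x.shape))
                 for _ in range(n)]
        v = np.mean(neigh, axis=0) - grad
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

# toy check: with a constant gradient the variance term vanishes and the
# iterate walks to the corner of the eps-ball, as plain MI-FGSM would
w = np.array([1.0, -2.0, 0.5])
x_adv = vt_mifgsm(np.zeros(3), lambda x: w, eps=0.1, steps=10)
```

The extra `n` gradient queries per step are the cost of the reduced update variance that, per the abstract, improves transfer rates.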