Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

T Long, Q Gao, L Xu, Z Zhou - Computers & Security, 2022 - Elsevier
Deep learning has been widely applied in various fields such as computer vision, natural
language processing, and data mining. Although deep learning has achieved significant …

On evaluating adversarial robustness of large vision-language models

Y Zhao, T Pang, C Du, X Yang, C Li… - Advances in …, 2024 - proceedings.neurips.cc
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …

Frequency domain model augmentation for adversarial attack

Y Long, Q Zhang, B Zeng, L Gao, X Liu, J Zhang… - European conference on …, 2022 - Springer
For black-box attacks, the gap between the substitute model and the victim model is usually
large, which manifests as a weak attack performance. Motivated by the observation that the …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …

Improving adversarial transferability via neuron attribution-based attacks

J Zhang, W Wu, J Huang, Y Huang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus
imperative to devise effective attack algorithms to identify the deficiencies of DNNs …

De-fake: Detection and attribution of fake images generated by text-to-image generation models

Z Sha, Z Li, N Yu, Y Zhang - Proceedings of the 2023 ACM SIGSAC …, 2023 - dl.acm.org
Text-to-image generation models that generate images based on prompt descriptions have
attracted an increasing amount of attention during the past few months. Despite their …

Feature importance-aware transferable adversarial attacks

Z Wang, H Guo, Z Zhang, W Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Transferability of adversarial examples is of central importance for attacking an unknown
model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks …

Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon

Y Zhong, X Liu, D Zhai, J Jiang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Estimating the risk level of adversarial examples is essential for safely deploying machine
learning models in the real world. One popular approach for physical-world attacks is to …

Evading defenses to transferable adversarial examples by translation-invariant attacks

Y Dong, T Pang, H Su, J Zhu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers
by adding imperceptible perturbations. An intriguing property of adversarial examples is …