Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

X-Adv: Physical adversarial object attacks against X-ray prohibited item detection

A Liu, J Guo, J Wang, S Liang, R Tao, W Zhou… - 32nd USENIX Security …, 2023 - usenix.org
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Dual attention suppression attack: Generate adversarial camouflage in physical world

J Wang, A Liu, Z Yin, S Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep learning models are vulnerable to adversarial examples. As a more threatening type
for practical deep learning systems, physical adversarial examples have received extensive …

BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning

S Liang, M Zhu, A Liu, B Wu, X Cao… - Proceedings of the …, 2024 - openaccess.thecvf.com
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP, they can be easily countered by specialized backdoor defenses for …

Patch-wise attack for fooling deep neural network

L Gao, Q Zhang, J Song, X Liu, HT Shen - Computer Vision–ECCV 2020 …, 2020 - Springer
By adding human-imperceptible noise to clean images, the resultant adversarial examples
can fool other unknown models. Features of a pixel extracted by deep neural networks …
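The snippet above describes adversarial examples formed by adding imperceptible noise to clean inputs. As a hypothetical illustration of that general idea (not the patch-wise method of this paper), here is a minimal FGSM-style sign-gradient step against a toy linear classifier; the weights, input, and step size are all made up for the sketch:

```python
import numpy as np

# Toy linear classifier: score = w . x + b (a stand-in for a trained DNN).
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def predict(x):
    """Binary decision from the linear score."""
    return 1 if w @ x + b > 0 else 0

x = np.array([1.0, 2.0, 0.5])   # "clean" input; classified as 1

# FGSM-style step: for a linear model, the gradient of the score w.r.t. x
# is just w, so moving against sign(w) lowers the score most per unit of
# L-infinity budget eps.
eps = 0.9
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the prediction flips: 1 -> 0
```

The L-infinity bound on the perturbation (every pixel moves by at most `eps`) is what the "human-imperceptible" constraint typically looks like in practice; transfer attacks apply the same perturbation to unknown models.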

Exploring the relationship between architectural design and adversarially robust generalization

A Liu, S Tang, S Liang, R Gong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial training has been demonstrated to be one of the most effective remedies for
defending against adversarial examples, yet it often suffers from a huge robustness generalization …

Bias-based universal adversarial patch attack for automatic check-out

A Liu, J Wang, X Liu, B Cao, C Zhang, H Yu - Computer Vision–ECCV …, 2020 - Springer
Adversarial examples are inputs with imperceptible perturbations that easily mislead
deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small …
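The adversarial patch mentioned above confines the noise to a small, localized region instead of spreading it over the whole image. A minimal sketch of that mechanics, with a random array standing in for the optimized, class-biased pattern the paper learns (image size, patch size, and placement are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 8x8 grayscale "image" with values in [0, 1].
image = rng.random((8, 8))

# A 3x3 patch; random here, but in a real attack this pattern is optimized.
patch = rng.random((3, 3))

def apply_patch(img, patch, top, left):
    """Overwrite one localized region with the patch; rest stays untouched."""
    out = img.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

attacked = apply_patch(image, patch, top=2, left=3)
changed = int((attacked != image).sum())
print(changed)   # at most 9 pixels differ; the rest of the image is intact
```

Because the patch is not constrained to be imperceptible, only localized, it can be printed and placed in the physical scene, which is what makes patch attacks practical against deployed systems such as automatic check-out.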

Towards benchmarking and assessing visual naturalness of physical world adversarial attacks

S Li, S Zhang, G Chen, D Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Physical-world adversarial attacks are highly practical and threatening: they fool real-world
deep learning systems by generating conspicuous and maliciously crafted real-world …

Sparse adversarial attack via perturbation factorization

Y Fan, B Wu, T Li, Y Zhang, M Li, Z Li… - Computer Vision–ECCV …, 2020 - Springer
This work studies the sparse adversarial attack, which aims to generate adversarial
perturbations at partial positions of a benign image, such that the perturbed image is …
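A sparse attack like the one described above can be thought of as factorizing the perturbation into an element-wise product of magnitudes and a binary position mask. The sketch below uses a simple top-k selection as a stand-in for the learned selection variables in the paper; all shapes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 "image" and a dense perturbation-magnitude array g.
x = rng.random((4, 4))
g = 0.1 * rng.standard_normal((4, 4))

# Factorization delta = g * m: keep only the k positions with largest |g|
# (a crude proxy for the optimized binary selection mask).
k = 3
mask = np.zeros(16)
mask[np.argsort(np.abs(g).ravel())[-k:]] = 1.0
m = mask.reshape(4, 4)

delta = g * m                          # sparse: nonzero at exactly k positions
x_adv = np.clip(x + delta, 0.0, 1.0)   # keep the perturbed image in range

print(int((delta != 0).sum()))   # number of perturbed pixels: 3
```

Separating "where to perturb" (`m`) from "how much" (`g`) is what lets the positions and magnitudes be optimized jointly yet regularized independently.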