A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

T Long, Q Gao, L Xu, Z Zhou - Computers & Security, 2022 - Elsevier
Deep learning has been widely applied in various fields such as computer vision, natural
language processing, and data mining. Although deep learning has achieved significant …

Adversarial deep learning: A survey on adversarial attacks and defense mechanisms on image classification

SY Khamaiseh, D Bagagem, A Al-Alaj… - IEEE …, 2022 - ieeexplore.ieee.org
The popularity of adopting deep neural networks (DNNs) to solve hard problems has
increased substantially. Specifically, in the field of computer vision, DNNs are becoming a …

Robustbench: a standardized adversarial robustness benchmark

F Croce, M Andriushchenko, V Sehwag… - arXiv preprint arXiv …, 2020 - arxiv.org
As a research community, we are still lacking a systematic understanding of the progress on
adversarial robustness, which often makes it hard to identify the most promising ideas in …

Minimally distorted adversarial examples with a fast adaptive boundary attack

F Croce, M Hein - International Conference on Machine …, 2020 - proceedings.mlr.press
The evaluation of robustness against adversarial manipulation of neural network-based
classifiers is mainly tested with empirical attacks, as methods for the exact computation, even …

Skip connections matter: On the transferability of adversarial examples generated with resnets

D Wu, Y Wang, ST Xia, J Bailey, X Ma - arXiv preprint arXiv:2002.05990, 2020 - arxiv.org
Skip connections are an essential component of current state-of-the-art deep neural
networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their …

Sparse and imperceivable adversarial attacks

F Croce, M Hein - … of the IEEE/CVF international conference …, 2019 - openaccess.thecvf.com
Neural networks have been proven to be vulnerable to a variety of adversarial attacks. From
a safety perspective, highly sparse adversarial attacks are particularly dangerous. On the …

Certified defenses for adversarial patches

P Chiang, R Ni, A Abdelkader, C Zhu, C Studer… - arXiv preprint arXiv …, 2020 - arxiv.org
Adversarial patch attacks are among the most practical threat models against real-
world computer vision systems. This paper studies certified and empirical defenses against …

On improving adversarial transferability of vision transformers

M Naseer, K Ranasinghe, S Khan, FS Khan… - arXiv preprint arXiv …, 2021 - arxiv.org
Vision transformers (ViTs) process input images as sequences of patches via self-attention,
a radically different architecture from that of convolutional neural networks (CNNs). This makes it …

Grnn: generative regression neural network—a data leakage attack for federated learning

H Ren, J Deng, X Xie - ACM Transactions on Intelligent Systems and …, 2022 - dl.acm.org
Data privacy has become an increasingly important issue in Machine Learning (ML), where
many approaches have been developed to tackle this challenge, e.g., cryptography …

[HTML] Research progress on adversarial attacks against intelligent radar image recognition

X Gao, Z Zhang, M Liu, Z Gong, X Li - Journal of Radars (雷达学报), 2023 - radars.ac.cn
Intelligent radar image recognition based on deep neural networks has become a frontier and hot topic
in radar information processing. However, neural network classification models are vulnerable to
adversarial attacks: an attacker can covertly mislead an intelligent target recognition model into …