Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Adversarial machine learning in image classification: A survey toward the defender's perspective

GR Machado, E Silva, RR Goldschmidt - ACM Computing Surveys …, 2021 - dl.acm.org
Deep Learning algorithms have achieved state-of-the-art performance for Image
Classification. For this reason, they have been used even in security-critical applications …

A survey on universal adversarial attack

C Zhang, P Benz, C Lin, A Karjauv, J Wu… - arXiv preprint arXiv …, 2021 - arxiv.org
The intriguing phenomenon of adversarial examples has attracted significant attention in
machine learning and what might be more surprising to the community is the existence of …

Multimodal safety-critical scenarios generation for decision-making algorithms evaluation

W Ding, B Chen, B Li, KJ Eun… - IEEE Robotics and …, 2021 - ieeexplore.ieee.org
Existing neural network-based autonomous systems are shown to be vulnerable to
adversarial attacks; therefore, sophisticated evaluation of their robustness is of great …

Data-free universal adversarial perturbation and black-box attack

C Zhang, P Benz, A Karjauv… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Universal adversarial perturbation (UAP), i.e., a single perturbation to fool the network for most
images, is widely recognized as a more practical attack because the UAP can be generated …
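The snippet above defines a UAP as one perturbation reused across all inputs, in contrast to per-image attacks. A minimal sketch of that idea, assuming a toy batch, an L-infinity budget `eps`, and pixel values in [0, 1] (all names and values here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_uap(images, delta, eps=0.1):
    """Add ONE shared perturbation to every image, keeping it inside an
    L-infinity ball of radius eps and the valid pixel range [0, 1]."""
    delta = np.clip(delta, -eps, eps)          # enforce the imperceptibility budget
    return np.clip(images + delta, 0.0, 1.0)   # same delta broadcast to all images

images = rng.random((4, 8, 8))                  # small batch of toy "images"
delta = rng.normal(scale=0.05, size=(8, 8))     # candidate universal perturbation
adv = apply_uap(images, delta)

# every adversarial image stays within eps of its original in L-infinity norm
assert float(np.abs(adv - images).max()) <= 0.1 + 1e-9
```

In a real attack `delta` would be optimized (over a training set, or data-free as in the paper above) to maximize the fooling rate; the point of the sketch is only the broadcast of a single `delta` across the batch.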

Towards a robust deep neural network against adversarial texts: A survey

W Wang, R Wang, L Wang, Z Wang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g.,
image classification, speech recognition, and natural language processing (NLP)). However …

Do input gradients highlight discriminative features?

H Shah, P Jain, P Netrapalli - Advances in Neural …, 2021 - proceedings.neurips.cc
Post-hoc gradient-based interpretability methods [Simonyan et al., 2013, Smilkov et al.,
2017] that provide instance-specific explanations of model predictions are often based on …
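The gradient-based interpretability methods mentioned above score each input feature by how strongly the model's output changes with it. A hedged sketch, using a central-difference estimate of the input gradient on a toy two-feature model (the model and inputs are assumptions for illustration, not the paper's setup):

```python
import numpy as np

def numerical_input_gradient(f, x, h=1e-5):
    """Central-difference estimate of df/dx_i for each input feature."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

# toy "model": a score dominated by the first feature
f = lambda x: np.tanh(3.0 * x[0] + 0.2 * x[1])
x = np.array([0.1, 0.1])

# saliency map: magnitude of the input gradient per feature
saliency = np.abs(numerical_input_gradient(f, x))
assert saliency[0] > saliency[1]  # the dominant feature gets larger saliency
```

The paper's question is whether such gradients actually highlight the discriminative features; the sketch only shows the mechanics of turning input gradients into a per-feature saliency score.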

Distilling robust and non-robust features in adversarial examples by information bottleneck

J Kim, BK Lee, YM Ro - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Adversarial examples, generated by carefully crafted perturbations, have attracted
considerable attention in research fields. Recent works have argued that the existence of the …
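A standard example of such a crafted perturbation is the one-step fast gradient sign method (FGSM), sketched here on a logistic-regression model where the input gradient has a closed form. FGSM is a generic illustration, not the method of the paper above, and the weights and inputs are assumed values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """One-step L-infinity attack: move each pixel by eps in the direction
    that increases the loss. For logistic loss, dL/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

w = np.array([2.0, -1.5, 0.5])   # toy model weights (assumed)
b = 0.0
x = np.array([0.6, 0.3, 0.8])    # toy input with true label y = 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.2)

# the attack lowers the model's confidence in the true label
assert sigmoid(w @ x_adv + b) < sigmoid(w @ x + b)
```

The robust/non-robust feature framing studied in the paper asks *which* input directions such perturbations exploit; the sketch only shows how a small, carefully signed perturbation shifts the prediction.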

Towards evaluating the robustness of deep diagnostic models by adversarial attack

M Xu, T Zhang, Z Li, M Liu, D Zhang - Medical Image Analysis, 2021 - Elsevier
Deep learning models (deep neural networks) have been widely used in challenging tasks
such as computer-aided disease diagnosis based on medical images. Recent studies have …

Universal adversarial perturbations through the lens of deep steganography: Towards a fourier perspective

C Zhang, P Benz, A Karjauv, IS Kweon - Proceedings of the AAAI …, 2021 - ojs.aaai.org
The booming interest in adversarial attacks stems from a misalignment between human
vision and a deep neural network (DNN), i.e., a human-imperceptible perturbation fools the …