Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Adversarial attacks and defenses in images, graphs and text: A review

H Xu, Y Ma, HC Liu, D Deb, H Liu, JL Tang… - International journal of …, 2020 - Springer
Deep neural networks (DNN) have achieved unprecedented success in numerous machine
learning tasks in various domains. However, the existence of adversarial examples raises …

The pitfalls of simplicity bias in neural networks

H Shah, K Tamuly, A Raghunathan… - Advances in …, 2020 - proceedings.neurips.cc
Several works have proposed Simplicity Bias (SB)---the tendency of standard training
procedures such as Stochastic Gradient Descent (SGD) to find simple models---to justify why …

Adversarial examples are not bugs, they are features

A Ilyas, S Santurkar, D Tsipras… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …

Adversarial training for free!

A Shafahi, M Najibi, MA Ghiasi, Z Xu… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial training, in which a network is trained on adversarial examples, is one of the few
defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high …

High-frequency component helps explain the generalization of convolutional neural networks

H Wang, X Wu, Z Huang… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
We investigate the relationship between the frequency spectrum of image data and the
generalization behavior of convolutional neural networks (CNN). We first notice CNN's …

Adversarial policies: Attacking deep reinforcement learning

A Gleave, M Dennis, C Wild, N Kant, S Levine… - arXiv preprint arXiv …, 2019 - arxiv.org
Deep reinforcement learning (RL) policies are known to be vulnerable to adversarial
perturbations to their observations, similar to adversarial examples for classifiers. However …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Adversarial training and robustness for multiple perturbations

F Tramer, D Boneh - Advances in neural information …, 2019 - proceedings.neurips.cc
Defenses against adversarial examples, such as adversarial training, are typically tailored to
a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these …

Anomalous example detection in deep learning: A survey

S Bulusu, B Kailkhura, B Li, PK Varshney… - IEEE Access, 2020 - ieeexplore.ieee.org
Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples resulting in
incorrect outputs. To make DL more robust, several posthoc (or runtime) anomaly detection …