Adversarial examples in modern machine learning: A review

RR Wiyatno, A Xu, O Dia, A De Berker - arXiv preprint arXiv:1911.05268, 2019 - arxiv.org
Recent research has found that many families of machine learning models are vulnerable to
adversarial examples: inputs that are specifically designed to cause the target model to …

Adversarial training for free!

A Shafahi, M Najibi, MA Ghiasi, Z Xu… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial training, in which a network is trained on adversarial examples, is one of the few
defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high …
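
The snippet above cuts off before describing the paper's gradient-reuse trick, so the following is only a minimal sketch of plain adversarial training in PyTorch: craft an FGSM example for each batch and train on it. The model, loader, optimizer, and the eps budget of 8/255 are illustrative placeholders, not details from the paper.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    # Single gradient-sign step on the cross-entropy loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=8 / 255):
    # One epoch of adversarial training: fit the model on perturbed inputs.
    model.train()
    for x, y in loader:
        x_adv = fgsm_example(model, x, y, eps)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()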

Evading defenses to transferable adversarial examples by translation-invariant attacks

Y Dong, T Pang, H Su, J Zhu - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers
by adding imperceptible perturbations. An intriguing property of adversarial examples is …
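
As a rough sketch of the translation-invariant idea (attacking an implicit ensemble of shifted inputs by smoothing the gradient with a fixed kernel before the sign step), assuming a PyTorch image classifier; the Gaussian kernel size and sigma below are illustrative choices, not the paper's settings.

import torch
import torch.nn.functional as F

def gaussian_kernel(size=15, sigma=3.0):
    # 2-D Gaussian kernel used to smooth the gradient.
    coords = torch.arange(size).float() - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def ti_fgsm_step(model, x, y, eps=8 / 255, kernel_size=15):
    # Smooth the input gradient with a depthwise Gaussian convolution before
    # the sign step, approximating an attack on translated copies of x.
    k = gaussian_kernel(kernel_size).to(x.device)
    k = k.view(1, 1, kernel_size, kernel_size).repeat(x.size(1), 1, 1, 1)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    grad = F.conv2d(grad, k, padding=kernel_size // 2, groups=x.size(1))
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()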

Nesterov accelerated gradient and scale invariance for adversarial attacks

J Lin, C Song, K He, L Wang, JE Hopcroft - arXiv preprint arXiv …, 2019 - arxiv.org
Deep learning models are vulnerable to adversarial examples crafted by applying human-
imperceptible perturbations on benign inputs. However, under the black-box setting, most …
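
A hedged sketch of the two ingredients named in the title, as they are commonly combined in an iterative FGSM-style attack: a Nesterov lookahead with momentum accumulation, and gradient averaging over down-scaled copies x / 2^i of the input. Step sizes, the number of scales, and the normalisation are illustrative choices, not taken from the paper.

import torch
import torch.nn.functional as F

def si_ni_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0, scales=4):
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        # Nesterov lookahead: evaluate the gradient at the anticipated point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        grad = torch.zeros_like(x)
        # Scale invariance: average gradients over down-scaled copies x / 2^i.
        for i in range(scales):
            loss = F.cross_entropy(model(x_nes / (2 ** i)), y)
            grad = grad + torch.autograd.grad(loss, x_nes)[0]
        grad = grad / scales
        # Momentum accumulation on the per-example L1-normalised gradient.
        g = mu * g + grad / (grad.abs().flatten(1).mean(dim=1).view(-1, 1, 1, 1) + 1e-12)
        x_adv = (x_adv + alpha * g.sign()).detach()
        # Project back into the L_inf ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv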

Improving transferability of adversarial examples with input diversity

C Xie, Z Zhang, Y Zhou, S Bai, J Wang… - Proceedings of the …, 2019 - openaccess.thecvf.com
Though CNNs have achieved state-of-the-art performance on various vision tasks, they
are vulnerable to adversarial examples---crafted by adding human-imperceptible …
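
A minimal sketch of the input-diversity idea: with some probability, randomly resize and zero-pad the image before computing the gradient, so that the resulting perturbation transfers better across models. The sizes, probability, and helper names below are assumptions, and the model is presumed to accept variable input resolutions.

import torch
import torch.nn.functional as F

def input_diversity(x, out_size=256, low=224, p=0.5):
    # With probability p, randomly resize the image and zero-pad it back to
    # out_size x out_size; otherwise return it unchanged.
    if torch.rand(1).item() >= p:
        return x
    size = int(torch.randint(low, out_size, (1,)).item())
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = out_size - size
    left = int(torch.randint(0, pad + 1, (1,)).item())
    top = int(torch.randint(0, pad + 1, (1,)).item())
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm_step(model, x, y, eps=8 / 255):
    # One FGSM step whose gradient is computed on a randomly transformed copy.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(input_diversity(x_adv)), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()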

Certified robustness to adversarial examples with differential privacy

M Lecuyer, V Atlidakis, R Geambasu… - … IEEE symposium on …, 2019 - ieeexplore.ieee.org
Adversarial examples that fool machine learning models, particularly deep neural networks,
have been a topic of intense research interest, with attacks and defenses being developed …
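
The defense adds calibrated noise so that predictions satisfy a differential-privacy guarantee, from which a certified robustness radius follows. The sketch below shows only a Monte-Carlo estimate of the expected scores under Gaussian input noise; the certification arithmetic, and where the noise layer is actually placed in the paper, are omitted, and sigma and the sample count are placeholders.

import torch

@torch.no_grad()
def noisy_expected_scores(model, x, sigma=0.25, n_samples=100):
    # Monte-Carlo estimate of the expected softmax scores under Gaussian
    # input noise; the DP analysis turns the gap between the top two expected
    # scores into a certified robustness radius (not computed here).
    num_classes = model(x).size(1)
    probs = torch.zeros(x.size(0), num_classes, device=x.device)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        probs += torch.softmax(model(noisy), dim=1)
    return probs / n_samples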

Adversarial examples on graph data: Deep insights into attack and defense

H Wu, C Wang, Y Tyshetskiy, A Docherty, K Lu… - arXiv preprint arXiv …, 2019 - arxiv.org
Graph deep learning models, such as graph convolutional networks (GCN), achieve
remarkable performance for tasks on graph data. Similar to other types of deep models …

Decoupling direction and norm for efficient gradient-based l2 adversarial attacks and defenses

J Rony, LG Hafemann, LS Oliveira… - Proceedings of the …, 2019 - openaccess.thecvf.com
Research on adversarial examples in computer vision tasks has shown that small, often
imperceptible changes to an image can induce misclassification, which has security …
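
A hedged sketch of a decoupled direction-and-norm style L2 attack: the update direction follows the normalised loss gradient, while the perturbation norm is shrunk whenever the current point already fools the model and grown otherwise. The step size, the norm-adjustment factor gamma, and the fallback of returning the clean input when nothing adversarial is found are illustrative choices, not the paper's exact algorithm.

import torch
import torch.nn.functional as F

def ddn_l2_attack(model, x, y, steps=100, alpha=0.05, gamma=0.05, init_eps=1.0):
    delta = torch.zeros_like(x, requires_grad=True)
    eps = torch.full((x.size(0),), init_eps, device=x.device)
    best = x.clone()  # fall back to the clean input if nothing adversarial is found
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, delta)
        is_adv = logits.argmax(dim=1) != y
        best = torch.where(is_adv.view(-1, 1, 1, 1), (x + delta).detach(), best)
        # Direction: a step along the L2-normalised loss gradient.
        g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        step = delta.detach() + alpha * g
        # Norm: shrink eps for points that are already adversarial, grow it otherwise.
        eps = torch.where(is_adv, eps * (1 - gamma), eps * (1 + gamma))
        norms = step.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
        proj = step * (eps.view(-1, 1, 1, 1) / norms)
        # Keep x + delta inside the valid pixel range.
        delta = torch.min(torch.max(proj, -x), 1 - x).detach().requires_grad_(True)
    return best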

Defense against adversarial attacks using feature scattering-based adversarial training

H Zhang, J Wang - Advances in neural information …, 2019 - proceedings.neurips.cc
We introduce a feature scattering-based adversarial training approach for improving model
robustness against adversarial attacks. Conventional adversarial training approaches …

NIC: Detecting adversarial samples with neural network invariant checking

S Ma, Y Liu - Proceedings of the 26th network and distributed system …, 2019 - par.nsf.gov
Deep Neural Networks (DNN) are vulnerable to adversarial samples that are generated by
perturbing correctly classified inputs to cause DNN models to misbehave (e.g., …