Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Do adversarially robust imagenet models transfer better?

H Salman, A Ilyas, L Engstrom… - Advances in Neural …, 2020 - proceedings.neurips.cc
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on
standard datasets can be efficiently adapted to downstream tasks. Typically, better pre …

Adversarial weight perturbation helps robust generalization

D Wu, ST Xia, Y Wang - Advances in neural information …, 2020 - proceedings.neurips.cc
The study of improving the robustness of deep neural networks against adversarial
examples has grown rapidly in recent years. Among them, adversarial training is the most …

Adversarial examples improve image recognition

C Xie, M Tan, B Gong, J Wang… - Proceedings of the …, 2020 - openaccess.thecvf.com
Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an
opposite perspective: adversarial examples can be used to improve image recognition …

Improving adversarial robustness requires revisiting misclassified examples

Y Wang, D Zou, J Yi, J Bailey, X Ma… - … conference on learning …, 2019 - openreview.net
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
imperceptible perturbations. A range of defense techniques have been proposed to improve …
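To illustrate how such imperceptible perturbations are typically crafted, here is a minimal numpy sketch of the fast gradient sign method (FGSM), one standard attack, on a toy logistic-regression model. The model, weights, and function names are illustrative assumptions, not taken from the paper above.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast gradient sign method: shift every coordinate of x by eps
    in the direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy logistic-regression loss L(w, x, y) = log(1 + exp(-y * w.x)),
# chosen so the input gradient is available in closed form.
def loss_grad_wrt_x(w, x, y):
    s = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))  # sigmoid(-y * w.x)
    return -y * w * s

w = np.array([1.0, -2.0, 0.5])   # fixed toy weights
x = np.array([0.2, 0.1, -0.3])   # clean input
y = 1.0                          # true label in {-1, +1}
x_adv = fgsm_perturb(x, loss_grad_wrt_x(w, x, y), eps=0.05)
```

Each coordinate moves by exactly eps, so the perturbation stays small in the L-infinity sense while the loss at `x_adv` is larger than at `x`.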

Adversarial examples are not bugs, they are features

A Ilyas, S Santurkar, D Tsipras… - Advances in neural …, 2019 - proceedings.neurips.cc
Adversarial examples have attracted significant attention in machine learning, but the
reasons for their existence and pervasiveness remain unclear. We demonstrate that …

Taxonomy of machine learning safety: A survey and primer

S Mohseni, H Wang, C Xiao, Z Yu, Z Wang… - ACM Computing …, 2022 - dl.acm.org
The open-world deployment of Machine Learning (ML) algorithms in safety-critical
applications such as autonomous vehicles needs to address a variety of ML vulnerabilities …

Attacks which do not kill training make adversarial learning stronger

J Zhang, X Xu, B Han, G Niu, L Cui… - International …, 2020 - proceedings.mlr.press
Adversarial training based on the minimax formulation is necessary for obtaining adversarial
robustness of trained models. However, it is conservative or even pessimistic so that it …
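The minimax formulation mentioned here alternates an inner maximization (finding a worst-case perturbation, often via projected gradient descent) with an outer minimization over the model weights. A minimal numpy sketch on the same kind of toy logistic-regression model follows; the step sizes, radii, and function names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    return np.log1p(np.exp(-y * np.dot(w, x)))

def grad_x(w, x, y):
    return -y * w * sigmoid(-y * np.dot(w, x))

def grad_w(w, x, y):
    return -y * x * sigmoid(-y * np.dot(w, x))

def pgd_inner_max(w, x, y, eps, alpha=0.02, steps=10):
    """Inner maximization: gradient ascent on the loss over x,
    projected back into an L-inf ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_x(w, x_adv, y))
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # projection
    return x_adv

def adversarial_training_step(w, x, y, eps, lr=0.1):
    """Outer minimization: one gradient step on the loss evaluated
    at the worst-case input found by the inner loop."""
    x_adv = pgd_inner_max(w, x, y, eps)
    return w - lr * grad_w(w, x_adv, y)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0
eps = 0.1
x_adv = pgd_inner_max(w, x, y, eps)
w_new = adversarial_training_step(w, x, y, eps)
```

The inner loop raises the loss within the perturbation budget; the outer step then lowers the loss at that worst-case point, which is the "conservative" behavior the paper revisits.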

Interpolation consistency training for semi-supervised learning

V Verma, K Kawaguchi, A Lamb, J Kannala, A Solin… - Neural Networks, 2022 - Elsevier
We introduce Interpolation Consistency Training (ICT), a simple and computationally
efficient algorithm for training Deep Neural Networks in the semi-supervised learning …
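The core idea of ICT is an interpolation-consistency term on unlabeled data: the model's prediction at a mixup of two unlabeled points should match the interpolation of (teacher) predictions at the endpoints. A minimal numpy sketch of that consistency loss follows; the toy models and batch are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mixup(a, b, lam):
    """Convex combination of two points (or of two predictions)."""
    return lam * a + (1.0 - lam) * b

def ict_consistency_loss(student, teacher, u1, u2, lam):
    """ICT consistency term: mean squared error between the student's
    prediction at an interpolated unlabeled point and the interpolation
    of the teacher's predictions at the two endpoints."""
    target = mixup(teacher(u1), teacher(u2), lam)
    pred = student(mixup(u1, u2, lam))
    return float(np.mean((pred - target) ** 2))

# Toy unlabeled batch and models (illustrative only).
u1 = np.array([[0.5, -1.0], [2.0, 0.3]])
u2 = np.array([[-0.2, 0.8], [1.0, -0.5]])
W = np.array([[1.0, 0.5], [-0.5, 2.0]])
linear = lambda x: x @ W              # linear map: loss is exactly zero
nonlinear = lambda x: np.tanh(x @ W)  # nonlinear map: generally nonzero
```

For a linear model the loss vanishes identically (linearity commutes with interpolation), so the term only penalizes non-linear behavior between unlabeled points, which is what pushes decision boundaries into low-density regions.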

Exploring architectural ingredients of adversarially robust deep neural networks

H Huang, Y Wang, S Erfani, Q Gu… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of
defense methods have been proposed to train adversarially robust DNNs, among which …