The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training

J Dong, SM Moosavi-Dezfooli… - Proceedings of the …, 2023 - openaccess.thecvf.com
Although current deep learning techniques have yielded superior performance on various
computer vision tasks, they are still vulnerable to adversarial examples. Adversarial …

Are perceptually-aligned gradients a general property of robust classifiers?

S Kaur, J Cohen, ZC Lipton - arXiv preprint arXiv:1910.08640, 2019 - arxiv.org
For a standard convolutional neural network, optimizing over the input pixels to maximize
the score of some target class will generally produce a grainy-looking version of the original …

Do wider neural networks really help adversarial robustness?

B Wu, J Chen, D Cai, X He… - Advances in Neural …, 2021 - proceedings.neurips.cc
Adversarial training is a powerful type of defense against adversarial examples. Previous
empirical results suggest that adversarial training requires wider networks for better …

Why robust generalization in deep learning is difficult: Perspective of expressive power

B Li, J Jin, H Zhong, J Hopcroft… - Advances in Neural …, 2022 - proceedings.neurips.cc
It is well-known that modern neural networks are vulnerable to adversarial examples. To
mitigate this problem, a series of robust learning algorithms have been proposed. However …

Adversarially robust generalization just requires more unlabeled data

R Zhai, T Cai, D He, C Dan, K He, J Hopcroft… - arXiv preprint arXiv …, 2019 - arxiv.org
Neural network robustness has recently been highlighted by the existence of adversarial
examples. Many previous works show that the learned networks do not perform well on …

Towards building more robust models with frequency bias

Q Bu, D Huang, H Cui - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
The vulnerability of deep neural networks to adversarial samples has been a major
impediment to their broad application, despite their success in various fields. Recently …

Hold me tight! influence of discriminative features on deep network boundaries

G Ortiz-Jimenez, A Modas… - Advances in Neural …, 2020 - proceedings.neurips.cc
Important insights towards the explainability of neural networks reside in the characteristics
of their decision boundaries. In this work, we borrow tools from the field of adversarial …

Understanding robust overfitting of adversarial training and beyond

C Yu, B Han, L Shen, J Yu, C Gong… - International …, 2022 - proceedings.mlr.press
Robust overfitting is widespread in the adversarial training of deep networks. The exact
underlying reasons for this are still not completely understood. Here, we explore the causes …

Improving adversarial robustness via guided complement entropy

HY Chen, JH Liang, SC Chang… - Proceedings of the …, 2019 - openaccess.thecvf.com
Adversarial robustness has emerged as an important topic in deep learning, as carefully
crafted attack samples can significantly degrade the performance of a model. Many recent …

Defense against universal adversarial perturbations

N Akhtar, J Liu, A Mian - Proceedings of the IEEE …, 2018 - openaccess.thecvf.com
Recent advances in deep learning show the existence of image-agnostic, quasi-
imperceptible perturbations that, when applied to any image, can fool a state-of-the-art …