Recent advances in adversarial training for adversarial robustness

T Bai, J Luo, J Zhao, B Wen, Q Wang - arXiv preprint arXiv:2102.01356, 2021 - arxiv.org
Adversarial training is one of the most effective approaches for defending deep learning
models against adversarial examples. Unlike other defense strategies, adversarial training …
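
For orientation, a minimal sketch of the min-max loop such methods use, in the style of PGD-based adversarial training; the attack, epsilon, step size, and step count are illustrative assumptions, not this paper's exact procedure:

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Inner maximization: search for a worst-case perturbation in the L-inf ball.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        # Clamp to [0, 1], assuming image inputs in that range.
        return (x + delta).clamp(0, 1).detach()

    def adversarial_training_step(model, optimizer, x, y):
        # Outer minimization: update the model on adversarial examples instead of clean ones.
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()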

Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep learning is the most widely used tool in contemporary computer vision. Its ability to
solve complex problems accurately is employed in vision research to learn deep …

Cross-entropy loss functions: Theoretical analysis and applications

A Mao, M Mohri, Y Zhong - International conference on …, 2023 - proceedings.mlr.press
Cross-entropy is a widely used loss function in applications. It coincides with the logistic loss
applied to the outputs of a neural network when the softmax is used. But what guarantees …
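
The coincidence noted here is a one-line identity (notation mine): with logits z and softmax outputs, the cross-entropy of the true label y is

    \ell(z, y) = -\log \frac{\exp(z_y)}{\sum_j \exp(z_j)} = \log \sum_j \exp(z_j) - z_y,

and in the binary case with logits (z, 0) and y = 1 this reduces to \log(1 + e^{-z}), the familiar logistic loss.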

Diffusion models for adversarial purification

W Nie, B Guo, Y Huang, C Xiao, A Vahdat… - arXiv preprint arXiv …, 2022 - arxiv.org
Adversarial purification refers to a class of defense methods that remove adversarial
perturbations using a generative model. These methods do not make assumptions on the …
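
A schematic of the purify-then-classify pipeline this class of defenses uses: diffuse the input part-way with the forward noising process, then run the learned reverse process back to a clean image. The diffusion object, its denoise method, and the timestep t* below are hypothetical placeholders, not the paper's actual interface:

    import torch

    def purify_and_classify(diffusion, classifier, x, t_star=100):
        # Forward process: add Gaussian noise up to timestep t* (VP-style schedule assumed).
        alpha_bar = diffusion.alphas_cumprod[t_star]           # hypothetical attribute
        x_t = alpha_bar.sqrt() * x + (1 - alpha_bar).sqrt() * torch.randn_like(x)
        # Reverse process: denoise from t* back to 0, washing out adversarial perturbations.
        x_0 = diffusion.denoise(x_t, t_start=t_star)           # hypothetical method
        return classifier(x_0)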

RobustBench: a standardized adversarial robustness benchmark

F Croce, M Andriushchenko, V Sehwag… - arXiv preprint arXiv …, 2020 - arxiv.org
As a research community, we still lack a systematic understanding of progress on
adversarial robustness, which often makes it hard to identify the most promising ideas in …
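
The accompanying robustbench package exposes leaderboard models by name; a usage sketch, with the caveat that the model identifier and keyword arguments reflect the docs at one point in time and may have changed:

    import torch
    from robustbench.data import load_cifar10
    from robustbench.utils import load_model

    # Load a leaderboard entry by its identifier.
    model = load_model(model_name='Carmon2019Unlabeled',
                       dataset='cifar10', threat_model='Linf')
    x_test, y_test = load_cifar10(n_examples=50)
    with torch.no_grad():
        clean_acc = (model(x_test).argmax(dim=1) == y_test).float().mean().item()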

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

F Croce, M Hein - International conference on machine …, 2020 - proceedings.mlr.press
The field of defense strategies against adversarial attacks has grown significantly in recent
years, but progress is hampered because the evaluation of adversarial defenses is often …
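
The resulting ensemble is packaged as AutoAttack; a usage sketch against a PyTorch classifier, assuming model, x_test, and y_test are already defined (argument names per the public package, which may have evolved):

    from autoattack import AutoAttack

    # 'standard' version runs the parameter-free ensemble: APGD-CE, APGD-DLR, FAB, Square.
    adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)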

Improving adversarial robustness requires revisiting misclassified examples

Y Wang, D Zou, J Yi, J Bailey, X Ma… - … conference on learning …, 2019 - openreview.net
Deep neural networks (DNNs) are vulnerable to adversarial examples crafted by
imperceptible perturbations. A range of defense techniques have been proposed to improve …

Rethinking Lipschitz neural networks and certified robustness: A Boolean function perspective

B Zhang, D Jiang, D He… - Advances in neural …, 2022 - proceedings.neurips.cc
Designing neural networks with bounded Lipschitz constant is a promising way to obtain
certifiably robust classifiers against adversarial examples. However, the relevant progress …
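
The certification argument behind this line of work fits in one inequality. If the logit map f is L-Lipschitz in the \ell_2 norm and the margin at x is M(x) = f_y(x) - \max_{j \neq y} f_j(x), then each logit difference f_y - f_j is at most \sqrt{2}L-Lipschitz, so

    \|x' - x\|_2 < \frac{M(x)}{\sqrt{2}\,L} \;\Rightarrow\; \arg\max_j f_j(x') = y.

(Some papers absorb the \sqrt{2} into the definition of L, stating the radius as M(x)/(2L) or M(x)/L instead.)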

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Exploring architectural ingredients of adversarially robust deep neural networks

H Huang, Y Wang, S Erfani, Q Gu… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of
defense methods have been proposed to train adversarially robust DNNs, among which …