Recent advances in adversarial training for adversarial robustness

T Bai, J Luo, J Zhao, B Wen, Q Wang - arXiv preprint arXiv:2102.01356, 2021 - arxiv.org
Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples. Unlike other defense strategies, adversarial training …
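For orientation, the defense these snippets repeatedly refer to is usually formalized as a min-max problem (the formulation popularized by Madry et al.); the rendering below, with loss $\mathcal{L}$, model $f_\theta$, and perturbation budget $\epsilon$, is a conventional sketch rather than a quotation from the paper above:

$$ \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_\theta(x+\delta),\, y\big) \Big] $$

The inner maximization crafts a worst-case perturbation $\delta$ for each example, and the outer minimization fits the parameters $\theta$ to those perturbed examples.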

Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep learning is the most widely used tool in contemporary computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep …

Robustbench: a standardized adversarial robustness benchmark

F Croce, M Andriushchenko, V Sehwag… - arXiv preprint arXiv …, 2020 - arxiv.org
As a research community, we still lack a systematic understanding of the progress on adversarial robustness, which often makes it hard to identify the most promising ideas in …

Overfitting in adversarially robust deep learning

L Rice, E Wong, Z Kolter - International Conference on …, 2020 - proceedings.mlr.press
It is common practice in deep learning to use overparameterized networks and train for as
long as possible; there are numerous studies that show, both theoretically and empirically …

Trustworthy AI: From principles to practices

B Li, P Qi, B Liu, S Di, J Liu, J Pei, J Yi… - ACM Computing Surveys, 2023 - dl.acm.org
The rapid development of Artificial Intelligence (AI) technology has enabled the deployment of various systems based on it. However, many current AI systems have been found vulnerable to …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Robust overfitting may be mitigated by properly learned smoothening

T Chen, Z Zhang, S Liu, S Chang… - … Conference on Learning …, 2020 - openreview.net
A recent study (Rice et al., 2020) revealed that overfitting is a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early stopping of …

Perceptual adversarial robustness: Defense against unseen threat models

C Laidlaw, S Singla, S Feizi - arXiv preprint arXiv:2006.12655, 2020 - arxiv.org
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception, which is used in the very definition of adversarial attacks that …

Adversarial training methods for deep learning: A systematic review

W Zhao, S Alwidian, QH Mahmoud - Algorithms, 2022 - mdpi.com
Deep neural networks are exposed to the risk of adversarial attacks such as the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms …
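As a concrete illustration of the two attack algorithms this snippet names, here is a minimal PyTorch-style sketch of FGSM and PGD; the function names and the `eps`, `alpha`, and `steps` parameters are illustrative assumptions, not code from the cited review:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps):
        # FGSM: a single step of size eps along the sign of the input gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]  # gradient w.r.t. the input only
        return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def pgd_attack(model, x, y, eps, alpha, steps):
        # PGD: iterate small signed-gradient steps of size alpha, projecting
        # back into the L-infinity ball of radius eps around the clean input.
        x_orig = x.clone().detach()
        x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)).clamp(0.0, 1.0)
        for _ in range(steps):
            x_adv = x_adv.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                       # keep a valid pixel range
        return x_adv.detach()

PGD is essentially FGSM applied repeatedly from a random start, which is why it is the stronger and more common attack in robustness evaluations.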

Confidence-calibrated adversarial training: Generalizing to unseen attacks

D Stutz, M Hein, B Schiele - International Conference on …, 2020 - proceedings.mlr.press
Adversarial training yields models that are robust against a specific threat model, e.g., $L_\infty$ adversarial examples. Typically, robustness does not generalize to previously unseen threat …
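For reference, an $L_\infty$ adversarial example in the sense used here is an input $x'$ that stays within a small $L_\infty$ ball around the clean input $x$ yet changes the model's prediction; the rendering below is the conventional definition, not an excerpt from the paper:

$$ \|x' - x\|_\infty \le \epsilon \quad \text{and} \quad f_\theta(x') \ne y $$

Because the constraint is tied to one specific norm and radius, a model trained to be robust inside this ball carries no guarantee under other threat models, which is the generalization gap the paper addresses.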