D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Adversarial training (AT) has become the de facto standard for obtaining models robust to adversarial examples. However, AT exhibits severe robust overfitting: cross-entropy loss on …
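The adversarial training referred to in these abstracts is commonly instantiated as a min-max procedure: an inner L-infinity PGD attack crafts a worst-case perturbation, and the outer loop takes a gradient step on that perturbed example. The sketch below is a minimal, hypothetical illustration using a hand-differentiated logistic model in numpy; it is not the implementation from any of the cited papers, and the step sizes, radius, and model are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, b, x, y):
    # Binary cross-entropy of a logistic model p = sigmoid(w.x + b).
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_grad(w, b, x, y):
    # Gradient of the loss with respect to the *input* x (used by the attack).
    p = sigmoid(w @ x + b)
    return (p - y) * w

def pgd_attack(w, b, x, y, eps=0.1, alpha=0.02, steps=10):
    # Inner maximization: L-inf PGD, the attack typically used in AT.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(w, b, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

def at_step(w, b, x, y, lr=0.1, eps=0.1):
    # Outer minimization: one SGD step taken on the adversarial example.
    x_adv = pgd_attack(w, b, x, y, eps=eps)
    p = sigmoid(w @ x_adv + b)
    return w - lr * (p - y) * x_adv, b - lr * (p - y)
```

The projection via `np.clip` keeps the perturbation inside the threat model's eps-ball; robust overfitting shows up when the outer loop drives the loss on these training-time adversarial examples to zero while test-time robust accuracy degrades.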
Recent work has shown that deep learning models in NLP are highly sensitive to low-level correlations between simple features and specific output labels, leading to overfitting and …
Correctly classifying adversarial examples is an essential but challenging requirement for safely deploying machine learning models. As reported in RobustBench, even the state-of …
F Sheikholeslami, A Lotfi, JZ Kolter - International Conference on …, 2021 - openreview.net
Adversarial attacks against deep networks can be defended against either by building robust classifiers or by creating classifiers that can detect the presence of …
S Baharlouei, F Sheikholeslami… - International …, 2023 - proceedings.mlr.press
This work concerns the development of deep networks that are certifiably robust to adversarial attacks. Joint robust classification-detection was recently introduced as a …
J Huang, H Xie, C Wu, X Xiang - Future Generation Computer Systems, 2023 - Elsevier
Recently, several adversarial training methods have been proposed for rejecting perturbation-based adversarial examples, which enhance the robustness of deep neural …
Recently, there has been emerging interest in adversarially training a classifier with a rejection option (also known as a selective classifier) to boost adversarial robustness. While …
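A rejection option, in its simplest and most common form, abstains whenever the classifier's confidence falls below a threshold. The snippet below is a minimal sketch of that confidence-thresholding rule (the threshold value and the `-1` reject code are illustrative assumptions); the cited works train the classifier and the rejection behavior jointly, which this sketch does not attempt.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def predict_with_reject(logits, threshold=0.9):
    # Return the predicted class, or -1 (reject) when the top softmax
    # probability is below the confidence threshold.
    p = softmax(logits)
    return int(np.argmax(p)) if p.max() >= threshold else -1
```

On adversarial inputs the hope is that confidence drops, so the sample is rejected rather than misclassified; a selective classifier is evaluated on the trade-off between coverage (fraction accepted) and accuracy on the accepted set.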
Adversarial training (AT) is one of the most effective strategies for promoting model robustness, yet even state-of-the-art adversarially trained models struggle to …
The softmax cross-entropy loss function has been widely used to train deep models for various tasks. In this work, we propose a Gaussian mixture (GM) loss function for deep …
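A Gaussian mixture loss of the kind this abstract describes models the deep features of each class as a Gaussian and classifies by posterior probability under that mixture. The sketch below assumes identity covariance and uniform class priors, and omits the likelihood regularizer and margin terms such losses typically add, so it should be read as a simplified illustration rather than the paper's formulation.

```python
import numpy as np

def gm_loss(feat, label, means, sigma=1.0):
    # Negative log posterior of `label` given feature `feat`, under a
    # class-conditional Gaussian model with shared identity covariance:
    # squared distance to each class mean acts as a (negated) logit.
    d2 = ((feat[None, :] - means) ** 2).sum(axis=1)   # one distance per class
    logits = -d2 / (2.0 * sigma ** 2)                 # log-likelihood up to a constant
    log_probs = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    return -log_probs[label]
```

Compared with plain softmax cross-entropy, the distance-based logits pull features toward their class mean, which yields the more compact, margin-friendly feature distributions these losses are proposed for.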