Deep Neural Networks (DNNs) have recently been shown to be vulnerable to adversarial examples, which are carefully crafted instances that can mislead DNNs to make …
Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs). We propose a new perspective for …
Deep neural networks progressively transform their inputs across multiple processing layers. What are the geometrical properties of the representations learned by these networks? Here …
D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Obtaining deep networks that are robust against adversarial examples and generalize well is an open problem. A recent hypothesis even states that both robust and accurate models …
D Stutz, M Hein, B Schiele - International Conference on …, 2020 - proceedings.mlr.press
Adversarial training yields robust models against a specific threat model, e.g., $L_\infty$ adversarial examples. Typically robustness does not generalize to previously unseen threat …
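The $L_\infty$ threat model referenced in this abstract can be illustrated with a minimal FGSM-style perturbation sketch. This is a hypothetical toy example (a linear logistic model with an analytic gradient, NumPy only), not the method of any of the listed papers:

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """One FGSM step against a linear logistic classifier.

    Loss: binary cross-entropy of sigmoid(w @ x + b) against label y in {0, 1}.
    dL/dx = (sigmoid(z) - y) * w, so the L_inf-bounded adversarial example
    is x + eps * sign(dL/dx), which stays inside the eps max-norm ball.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # ascend the loss within the L_inf ball

# Toy usage: perturb a point classified as class 1 to reduce its margin.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])             # w @ x + b = 1.5, confidently class 1
x_adv = fgsm_linear(x, y=1.0, w=w, b=b, eps=0.4)
# The perturbation's max-norm equals eps by construction, and the
# classifier's margin w @ x_adv + b is strictly smaller than before.
```

Adversarial training in this threat model replaces clean inputs with such worst-case perturbations during training; robustness to $L_\infty$ perturbations does not automatically transfer to other norms, which is the gap this abstract addresses.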
J Jia, Y Liu, X Cao, NZ Gong - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier via modifying, adding, and/or removing some carefully selected training examples, such that the …
D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Adversarial training (AT) has become the de-facto standard to obtain models robust against adversarial examples. However, AT exhibits severe robust overfitting: cross-entropy loss on …
Y Wang, S Jha, K Chaudhuri - International Conference on …, 2018 - proceedings.mlr.press
Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of …
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the …