Understanding and utilizing deep neural networks trained with noisy labels

P Chen, BB Liao, G Chen… - … conference on machine …, 2019 - proceedings.mlr.press
Noisy labels are ubiquitous in real-world datasets, which poses a challenge for robustly
training deep neural networks (DNNs), as DNNs usually have a high capacity to memorize …

Characterizing adversarial subspaces using local intrinsic dimensionality

X Ma, B Li, Y Wang, SM Erfani, S Wijewickrema… - arXiv preprint arXiv …, 2018 - arxiv.org
Deep Neural Networks (DNNs) have recently been shown to be vulnerable to
adversarial examples, which are carefully crafted instances that can mislead DNNs to make …
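In this line of work, the local intrinsic dimensionality (LID) of a point is typically estimated from its nearest-neighbour distances with a maximum-likelihood estimator. The NumPy sketch below illustrates that standard estimator; it is not code from the paper, and the function name, the batch-based neighbourhood, and the default k are assumptions.

import numpy as np

def lid_mle(query, reference, k=20):
    # Maximum-likelihood LID estimate for each row of `query`, using `reference`
    # as the pool of neighbours. Assumes every query point also appears in
    # `reference` (e.g., LID within a mini-batch), so the zero self-distance
    # can be dropped.
    dists = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    dists = np.sort(dists, axis=1)[:, 1:k + 1]   # k nearest non-trivial distances
    r_max = dists[:, -1:]                        # distance to the k-th neighbour
    # MLE: LID = -( (1/k) * sum_i log(r_i / r_k) )^{-1}
    return -1.0 / np.mean(np.log(dists / r_max + 1e-12), axis=1)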

Dimensionality-driven learning with noisy labels

X Ma, Y Wang, ME Houle, S Zhou… - International …, 2018 - proceedings.mlr.press
Datasets with significant proportions of noisy (incorrect) class labels present challenges for
training accurate Deep Neural Networks (DNNs). We propose a new perspective for …

Intrinsic dimension of data representations in deep neural networks

A Ansuini, A Laio, JH Macke… - Advances in Neural …, 2019 - proceedings.neurips.cc
Deep neural networks progressively transform their inputs across multiple processing layers.
What are the geometrical properties of the representations learned by these networks? Here …

Disentangling adversarial robustness and generalization

D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Obtaining deep networks that are robust against adversarial examples and generalize well
is an open problem. A recent hypothesis even states that both robust and accurate models …

Confidence-calibrated adversarial training: Generalizing to unseen attacks

D Stutz, M Hein, B Schiele - International Conference on …, 2020 - proceedings.mlr.press
Adversarial training yields models that are robust against a specific threat model, e.g., $L_\infty$
adversarial examples. Typically, robustness does not generalize to previously unseen threat …
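For reference on the $L_\infty$ threat model mentioned here, the PyTorch sketch below generates adversarial examples with projected gradient descent (PGD) inside an epsilon-ball; it is a generic illustration, not the confidence-calibrated training procedure proposed in the paper, and the step sizes and iteration count are placeholder assumptions.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Generic L_inf PGD attack; assumes inputs are scaled to [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the eps-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()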

Certified robustness of nearest neighbors against data poisoning and backdoor attacks

J Jia, Y Liu, X Cao, NZ Gong - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier by
modifying, adding, and/or removing some carefully selected training examples, such that the …

Relating adversarially robust generalization to flat minima

D Stutz, M Hein, B Schiele - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Adversarial training (AT) has become the de facto standard to obtain models robust against
adversarial examples. However, AT exhibits severe robust overfitting: cross-entropy loss on …

Analyzing the robustness of nearest neighbors to adversarial examples

Y Wang, S Jha, K Chaudhuri - International Conference on …, 2018 - proceedings.mlr.press
Motivated by safety-critical applications, test-time attacks on classifiers via adversarial
examples have recently received a great deal of attention. However, there is a general lack of …
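For intuition in the nearest-neighbour setting studied here: a 1-NN prediction at a test point cannot change under any perturbation smaller than half the gap between the distance to its nearest training point and the distance to the nearest training point of a different label. The NumPy sketch below computes that radius; it illustrates the general idea rather than the paper's analysis, and the function name is an assumption.

import numpy as np

def one_nn_certified_radius(x, X_train, y_train):
    # Euclidean 1-NN prediction at x and a radius below which it provably
    # cannot flip. Assumes at least two distinct labels are present.
    dists = np.linalg.norm(X_train - x, axis=1)
    pred = y_train[np.argmin(dists)]          # label of the nearest training point
    d_pred = dists[y_train == pred].min()     # nearest point with the predicted label
    d_other = dists[y_train != pred].min()    # nearest point with any other label
    return pred, (d_other - d_pred) / 2.0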

High-performing neural network models of visual cortex benefit from high latent dimensionality

E Elmoznino, MF Bonner - PLOS Computational Biology, 2024 - journals.plos.org
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core
representational principles of computational models in neuroscience. Here we examined the …