Learning a single neuron with adversarial label noise via gradient descent

I Diakonikolas, V Kontonis… - … on Learning Theory, 2022 - proceedings.mlr.press
We study the fundamental problem of learning a single neuron, i.e., a function of the form
$\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations $\sigma:\mathbb{R}\to\mathbb{R}$, with …
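The gradient-descent setup this abstract describes can be illustrated with a toy sketch: plain gradient descent on the squared loss for a single sigmoid neuron. The sigmoid activation, Gaussian data, step size, and noiseless labels below are illustrative assumptions, not the paper's exact setting (which also handles adversarial label noise).

```python
import numpy as np

# Toy sketch: gradient descent for a single sigmoid neuron on squared loss.
# Activation, data, and step size are illustrative assumptions.

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))          # one example of a monotone activation

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = sigma(X @ w_star)                        # noiseless labels for the sketch

def loss(w):
    return 0.5 * np.mean((sigma(X @ w) - y) ** 2)

w = np.zeros(d)
loss_init = loss(w)
for _ in range(2000):
    s = sigma(X @ w)
    grad = X.T @ ((s - y) * s * (1.0 - s)) / n   # chain rule: sigma'(z) = s(1-s)
    w -= 1.0 * grad
loss_final = loss(w)
```

On this realizable instance the training loss drops steadily; the paper's contribution is establishing what happens when the labels are adversarially corrupted.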

Self-training converts weak learners to strong learners in mixture models

S Frei, D Zou, Z Chen, Q Gu - International Conference on …, 2022 - proceedings.mlr.press
We consider a binary classification problem when the data comes from a mixture of two
rotationally symmetric distributions satisfying concentration and anti-concentration …
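The self-training loop the abstract refers to can be caricatured on a mixture of two spherical Gaussians with means $\pm\mu$: a weak linear direction pseudo-labels the data, then is refit as the pseudo-labeled class-mean direction. The mixture, the all-ones initialization, and the refit rule below are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Self-training caricature on a two-component Gaussian mixture.
# A weak direction pseudo-labels all points, then is refit on them.

rng = np.random.default_rng(1)
n, d = 2000, 10
mu = np.zeros(d)
mu[0] = 2.0                                  # component mean, +/- mu
y_true = rng.choice([-1.0, 1.0], size=n)
X = y_true[:, None] * mu + rng.standard_normal((n, d))

w = np.ones(d)                               # weak learner: barely aligned with mu
acc_init = np.mean(np.sign(X @ w) == y_true)
for _ in range(5):                           # self-training rounds
    pseudo = np.sign(X @ w)                  # pseudo-label every point
    w = (pseudo[:, None] * X).mean(axis=0)   # refit on own pseudo-labels
    w /= np.linalg.norm(w)
acc_final = np.mean(np.sign(X @ w) == y_true)
```

Each round replaces the direction with the class-mean estimate under its own labels, which pulls it toward $\mu$; this is the weak-to-strong amplification the paper analyzes rigorously.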

Robustness guarantees for adversarially trained neural networks

P Mianjy, R Arora - Advances in neural information …, 2023 - proceedings.neurips.cc
We study robust adversarial training of two-layer neural networks as a bi-level optimization
problem. In particular, for the inner loop that implements the adversarial attack during …
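The bi-level structure (inner adversarial attack, outer training step) is easiest to see for a linear model with logistic loss, where the inner $\ell_\infty$ maximization has a closed form. The data, radius `eps`, and step size below are illustrative assumptions; the paper's subject is two-layer networks, not the linear sketch shown here.

```python
import numpy as np

# Adversarial training sketch for a linear classifier with logistic loss.
# Inner loop: worst-case l_inf perturbation (closed form for linear models).
# Outer loop: gradient step on the perturbed batch.

rng = np.random.default_rng(2)
n, d, eps = 200, 5, 0.1
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true)

def loss_grad(w, X, y):
    p = 1.0 / (1.0 + np.exp(y * (X @ w)))    # |d(logistic loss)/d(margin)|
    return -(p * y) @ X / len(y)

w = np.zeros(d)
for _ in range(300):
    # inner maximization: delta = -eps * y * sign(w) minimizes the margin
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    w -= 0.5 * loss_grad(w, X_adv, y)        # outer minimization step

clean_acc = np.mean(np.sign(X @ w) == y)
```

For deep networks the inner problem has no closed form, which is why iterative attacks (e.g. PGD) are run in the inner loop instead.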

Provable generalization of sgd-trained neural networks of any width in the presence of adversarial label noise

S Frei, Y Cao, Q Gu - International Conference on Machine …, 2021 - proceedings.mlr.press
We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic
gradient descent (SGD) following an arbitrary initialization. We prove that SGD produces …
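A minimal version of the architecture in this abstract, assuming clean labels and common defaults the paper does not prescribe: a one-hidden-layer leaky ReLU network with the first layer trained by one-sample SGD and the second layer fixed at random signs. Width, data, and step size are illustrative.

```python
import numpy as np

# SGD sketch: one-hidden-layer leaky ReLU net, logistic loss.
# First layer trained, second layer fixed; all hyperparameters illustrative.

rng = np.random.default_rng(5)
n, d, m, alpha = 400, 4, 50, 0.1             # width m, leaky slope alpha

X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true)                      # linearly separable labels

W = rng.standard_normal((m, d)) / np.sqrt(d)      # trained first layer
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed second layer

def leaky(z):
    return np.where(z > 0, z, alpha * z)

lr = 0.5
for _ in range(5000):
    i = rng.integers(n)                      # one-sample SGD step
    z = W @ X[i]
    f = a @ leaky(z)
    g = -y[i] / (1.0 + np.exp(y[i] * f))     # d(logistic loss)/d f
    W -= lr * g * (a * np.where(z > 0, 1.0, alpha))[:, None] * X[i][None, :]

train_acc = np.mean(np.sign(leaky(X @ W.T) @ a) == y)
```

The paper's point is that the guarantee holds for any width and any initialization even under adversarial label noise; the sketch only shows the training dynamics in the benign case.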

Interpolation can hurt robust generalization even when there is no noise

K Donhauser, A Tifrea, M Aerni… - Advances in Neural …, 2021 - proceedings.neurips.cc
Numerous recent works show that overparameterization implicitly reduces variance for min-
norm interpolators and max-margin classifiers. These findings suggest that ridge …
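A min-norm interpolator of the kind these works analyze can be constructed explicitly in the overparameterized regime via the Moore-Penrose pseudoinverse; the random data and dimensions below are illustrative.

```python
import numpy as np

# Min-norm interpolation in overparameterized linear regression (d > n):
# among all w with X w = y, the pseudoinverse returns the smallest-l2-norm one.

rng = np.random.default_rng(3)
n, d = 20, 100
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w_min = np.linalg.pinv(X) @ y            # min-norm interpolating solution

# Any other interpolant differs from w_min by a null-space direction of X,
# orthogonal to w_min, so it can only have a larger norm.
v = rng.standard_normal(d)
v_null = v - X.T @ np.linalg.solve(X @ X.T, X @ v)   # project v onto null(X)
w_other = w_min + v_null
```

The debate the abstract gestures at is whether this implicit variance reduction of `w_min` still helps once robust (adversarial) test error, rather than standard test error, is the target.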

Adversarial Training: A Survey

M Zhao, L Zhang, J Ye, H Lu, B Yin, X Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training (AT) refers to integrating adversarial examples (inputs altered with
imperceptible perturbations that can significantly impact model predictions) into the training …

Synergy-of-experts: Collaborate to improve adversarial robustness

S Cui, J Zhang, J Liang, B Han… - Advances in Neural …, 2022 - proceedings.neurips.cc
Learning adversarially robust models requires predictions to be invariant over a small
neighborhood of each natural input, which often runs up against insufficient model capacity. There is research …

Benign overfitting in adversarially robust linear classification

J Chen, Y Cao, Q Gu - Uncertainty in Artificial Intelligence, 2023 - proceedings.mlr.press
Benign overfitting, where classifiers memorize noisy training data yet still achieve good
generalization performance, has drawn great attention in the machine learning community …

The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness

Y Hao, T Zhang - arXiv preprint arXiv:2401.12236, 2024 - arxiv.org
Recent empirical and theoretical studies have established the generalization capabilities of
large machine learning models that are trained to (approximately or exactly) fit noisy data. In …

On the convergence of certified robust training with interval bound propagation

Y Wang, Z Shi, Q Gu, CJ Hsieh - arXiv preprint arXiv:2203.08961, 2022 - arxiv.org
Interval Bound Propagation (IBP) is so far the basis of state-of-the-art methods for training
neural networks with certifiable robustness guarantees when potential adversarial …
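The core IBP computation, propagating elementwise lower/upper bounds of an $\ell_\infty$ input ball through affine layers and monotone activations, fits in a few lines. The two-layer weights and the input below are illustrative, not from the paper.

```python
import numpy as np

# Interval Bound Propagation through affine + ReLU layers:
# track an elementwise [lo, hi] box that provably contains the output
# for every input in the l_inf ball around x.

def ibp_affine(lo, hi, W, b):
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad            # worst-case spread of the interval
    return mid_out - rad_out, mid_out + rad_out

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone

x = np.array([0.5, -0.2])
eps = 0.1
lo, hi = x - eps, x + eps                # input box: l_inf ball of radius eps

W1 = np.array([[1.0, -1.0], [2.0, 0.5]]); b1 = np.zeros(2)
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
W2 = np.array([[1.0, 1.0]]); b2 = np.array([0.0])
lo, hi = ibp_affine(lo, hi, W2, b2)      # certified output bounds
```

Certified training minimizes a loss on these (loose but cheap) bounds; the paper's question is why and when such training converges.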