Introduction to online convex optimization

E Hazan - Foundations and Trends® in Optimization, 2016 - nowpublishers.com
This monograph portrays optimization as a process. In many practical applications the
environment is so complex that it is infeasible to lay out a comprehensive theoretical model …

Optimal learners for realizable regression: PAC learning and online learning

I Attias, S Hanneke, A Kalavasis… - Advances in …, 2023 - proceedings.neurips.cc
In this work, we aim to characterize the statistical complexity of realizable regression both in
the PAC learning setting and the online learning setting. Previous work had established the …

Sharper bounds for uniformly stable algorithms

O Bousquet, Y Klochkov… - … on Learning Theory, 2020 - proceedings.mlr.press
Deriving generalization bounds for stable algorithms is a classical question in learning
theory taking its roots in the early works by Vapnik and Chervonenkis (1974) and Rogers …

Towards a unified information-theoretic framework for generalization

M Haghifam, GK Dziugaite… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this work, we investigate the expressiveness of the "conditional mutual information" (CMI)
framework of Steinke and Zakynthinou (2020) and the prospect of using it to provide a …

Stability and Deviation Optimal Risk Bounds with Convergence Rate O(1/n)

Y Klochkov, N Zhivotovskiy - Advances in Neural …, 2021 - proceedings.neurips.cc
The sharpest known high-probability generalization bounds for uniformly stable algorithms
(Feldman, Vondrak, NeurIPS 2018, COLT 2019), (Bousquet, Klochkov, Zhivotovskiy, COLT …

Sample-conditioned hypothesis stability sharpens information-theoretic generalization bounds

Z Wang, Y Mao - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We present new information-theoretic generalization guarantees through a novel
construction of the "neighboring-hypothesis" matrix and a new family of stability notions …

Minimum description length and generalization guarantees for representation learning

M Sefidgaran, A Zaidi… - Advances in Neural …, 2024 - proceedings.neurips.cc
A major challenge in designing efficient statistical supervised learning algorithms is finding
representations that perform well not only on available training samples but also on unseen …

Compression, generalization and learning

MC Campi, S Garatti - Journal of Machine Learning Research, 2023 - jmlr.org
A compression function is a map that slims down an observational set into a subset of
reduced size, while preserving its informational content. In multiple applications, the …
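The compression function described in this abstract can be illustrated with a minimal sketch. This is a hypothetical toy example (learning an interval from real-valued samples), not the authors' construction: the pair (min, max) is a two-element subset of the data that preserves all the information needed to reconstruct the learned predictor.

```python
def compress(samples):
    """Slim an observational set down to a two-element subset: its extremes."""
    return (min(samples), max(samples))

def reconstruct(compressed):
    """Rebuild the predictor (interval membership) from the compressed subset."""
    lo, hi = compressed
    return lambda x: lo <= x <= hi

# The reconstructed predictor is consistent with every original sample,
# even though only two of the five points were retained.
samples = [2.0, 3.5, 1.2, 4.8, 3.3]
predictor = reconstruct(compress(samples))
assert all(predictor(x) for x in samples)
```

The size of the retained subset (here, two) plays the role of the compression measure that such frameworks relate to generalization.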

Adversarially robust learning with unknown perturbation sets

O Montasser, S Hanneke… - Conference on Learning …, 2021 - proceedings.mlr.press
We study the problem of learning predictors that are robust to adversarial examples with
respect to an unknown perturbation set, relying instead on interaction with an adversarial …

The Pick-to-Learn algorithm: Empowering compression for tight generalization bounds and improved post-training performance

D Paccagnan, M Campi… - Advances in Neural …, 2024 - proceedings.neurips.cc
Generalization bounds are valuable both for theory and applications. On the one hand, they
shed light on the mechanisms that underpin the learning processes; on the other, they certify …