In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli …
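A minimal sketch of the stepwise argument this line of work shares (notation ours, not the authors'): for a noisy iterative algorithm whose iterates satisfy $W_t = g(W_{t-1}, S) + \xi_t$ with noise $\xi_t$ independent across steps, the data-processing inequality and the chain rule of mutual information give
\[
I(W_T; S) \;\le\; I(W_1, \ldots, W_T; S) \;=\; \sum_{t=1}^{T} I\big(W_t; S \mid W_1, \ldots, W_{t-1}\big),
\]
after which each per-step term can be bounded, for instance by the capacity of an additive-Gaussian-noise channel when $\xi_t$ is Gaussian.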
The information-theoretic framework of Russo and Zou (2016) and Xu and Raginsky (2017) provides bounds on the generalization error of a learning algorithm in terms of the mutual …
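For concreteness, the headline bound of Xu and Raginsky (2017), stated under the standard assumption that the loss $\ell(w, Z)$ is $\sigma$-sub-Gaussian for every hypothesis $w$: for $n$ i.i.d. training points $S$ and algorithm output $W$,
\[
\Big| \mathbb{E}\big[L_\mu(W) - L_S(W)\big] \Big| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]
where $L_\mu$ and $L_S$ denote the population and empirical risks.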
G. Aminian, Y. Bu, L. Toni et al., Advances in Neural Information Processing Systems, 2021 (proceedings.neurips.cc)
Various approaches have been developed to upper-bound the generalization error of a supervised learning algorithm. However, existing bounds are often loose and lack …
A. Asadi, E. Abbe, S. Verdú, Advances in Neural Information Processing Systems, 2018 (proceedings.neurips.cc)
Bounding the generalization error of learning algorithms has a long history, which nonetheless falls short of explaining various generalization successes, including those of deep learning. Two …
A fundamental question in theoretical machine learning is generalization. Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to …
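A representative PAC-Bayesian statement (a McAllester-style bound for losses in $[0, 1]$; the specific constants follow Maurer's refinement and are our choice of instance): for any data-free prior $\pi$ and $\delta \in (0, 1)$, with probability at least $1 - \delta$ over the $n$ i.i.d. training points, simultaneously for all posteriors $\rho$,
\[
L_\mu(\rho) \;\le\; L_S(\rho) + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{n}/\delta)}{2n}}.
\]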
In this work, the probability of an event under some joint distribution is bounded by measuring it with the product of the marginals instead (which is typically easier to analyze) …
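The change-of-measure identity underlying such decoupling (the standard Donsker-Varadhan step, not necessarily this paper's exact tool): for any measurable $f$ with finite right-hand side,
\[
\mathbb{E}_{P_{XY}}\big[f(X, Y)\big] \;\le\; D_{\mathrm{KL}}\big(P_{XY} \,\big\|\, P_X \otimes P_Y\big) + \log \mathbb{E}_{P_X \otimes P_Y}\big[e^{f(X, Y)}\big].
\]
Taking $f = \lambda \mathbf{1}_E$ and optimizing over $\lambda$ converts the probability of an event $E$ under the joint distribution into a bound involving its probability under the product of the marginals.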
S. Yagli, A. Dytso, H. V. Poor, 2020 IEEE 21st International …, 2020 (ieeexplore.ieee.org)
Machine learning algorithms operating on mobile networks fall into three categories. The first is the classical situation in which end-user devices send their …
We capitalize on the Wasserstein distance to obtain two information-theoretic bounds on the generalization error of learning algorithms. First, we specialize the Wasserstein distance into …
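The mechanism behind Lipschitz-Wasserstein bounds of this kind (our sketch): Kantorovich-Rubinstein duality states
\[
W_1(P, Q) \;=\; \sup_{\|f\|_{\mathrm{Lip}} \le 1} \Big( \mathbb{E}_P[f] - \mathbb{E}_Q[f] \Big),
\]
so if the loss is $L$-Lipschitz in the hypothesis, any generalization gap written as a difference of expectations under two hypothesis distributions is at most $L$ times their Wasserstein-1 distance.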
J. Chen, H. Chen, B. Gu, H. Deng, Advances in Neural Information Processing Systems, 2024 (proceedings.neurips.cc)
The federated zeroth-order optimization (FedZO) algorithm enjoys the advantages of both zeroth-order optimization and federated learning, and has shown exceptional performance on …
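For intuition about the zeroth-order ingredient, here is a minimal sketch of a two-point random-direction gradient estimator, a standard building block of zeroth-order methods; the function names, step sizes, and smoothing parameter below are illustrative choices, not FedZO's actual interface or update rule.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=20, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages directional finite differences along random Gaussian
    directions, so only evaluations of f (no gradients) are needed.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape[0])
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

# Usage: minimize a simple quadratic with gradient-free updates.
f = lambda x: 0.5 * np.sum(x ** 2)
x = np.ones(5)
for _ in range(200):
    x = x - 0.1 * zo_gradient(f, x)
print(f(x))  # close to 0
```

In a federated variant, each client would form such an estimate on its local data and the server would average the resulting updates; the sketch above shows only the single-machine estimator.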