Generalization bounds: Perspectives from information theory and PAC-Bayes

F Hellström, G Durisi, B Guedj… - … and Trends® in …, 2025 - nowpublishers.com
A fundamental question in theoretical machine learning is generalization. Over the past
decades, the PAC-Bayesian approach has been established as a flexible framework to …

PAC-Bayes generalisation bounds for heavy-tailed losses through supermartingales

M Haddouche, B Guedj - arXiv preprint arXiv:2210.00928, 2022 - arxiv.org
While PAC-Bayes is now an established learning framework for light-tailed losses (e.g., subgaussian or subexponential), its extension to the case of heavy-tailed losses …

MMD-FUSE: Learning and combining kernels for two-sample testing without data splitting

F Biggs, A Schrab, A Gretton - Advances in Neural …, 2024 - proceedings.neurips.cc
We propose novel statistics which maximise the power of a two-sample test based on the Maximum Mean Discrepancy (MMD), by adapting over the set of kernels used in defining it …

Learning via Wasserstein-based high probability generalisation bounds

P Viallard, M Haddouche… - Advances in Neural …, 2024 - proceedings.neurips.cc
Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM); this is in particular at the core of PAC-Bayesian …

Tighter PAC-Bayes generalisation bounds by leveraging example difficulty

F Biggs, B Guedj - International Conference on Artificial …, 2023 - proceedings.mlr.press
We introduce a modified version of the excess risk, which can be used to obtain empirically
tighter, faster-rate PAC-Bayesian generalisation bounds. This modified excess risk …

Federated Learning with Nonvacuous Generalisation Bounds

P Jobic, M Haddouche, B Guedj - arXiv preprint arXiv:2310.11203, 2023 - arxiv.org
We introduce a novel strategy to train randomised predictors in federated learning, where
each node of the network aims at preserving its privacy by releasing a local predictor but …

Aggregated f-average Neural Network for Interpretable Ensembling

M Vu, E Chouzenoux, JC Pesquet, IB Ayed - arXiv preprint arXiv …, 2023 - arxiv.org
Ensemble learning leverages multiple models (i.e., weak learners) on a common machine learning task to enhance prediction performance. Basic ensembling approaches average …

Exploring Generalisation Performance through PAC-Bayes

F Biggs - 2024 - discovery.ucl.ac.uk
Generalisation in machine learning refers to the ability of a predictor learned on some
dataset to perform accurately on new, unseen data. Without generalisation, we might be …

Generalisation and expressiveness for over-parameterised neural networks

E Clerico - 2023 - ora.ox.ac.uk
Over-parameterised modern neural networks owe their success to two fundamental
properties: expressive power and generalisation capability. The former refers to the model's …