[BOOK] Learning theory from first principles

F Bach - 2024 - di.ens.fr
This draft textbook is extracted from lecture notes from a class that I have taught
(unfortunately online, but this gave me an opportunity to write more detailed notes) during …

A convex duality framework for GANs

F Farnia, D Tse - Advances in neural information …, 2018 - proceedings.neurips.cc
A generative adversarial network (GAN) is a minimax game between a generator mimicking
the true model and a discriminator distinguishing the samples produced by the generator …
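
For reference, the standard GAN objective that such duality analyses start from can be written as the following minimax problem (a generic statement of the GAN game, not the paper's specific convex duality framework):

$$\min_{G}\,\max_{D}\; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big],$$

where $G$ maps noise $z$ to samples and $D$ outputs the probability that its input is real.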

Calibrated surrogate losses for adversarially robust classification

H Bao, C Scott, M Sugiyama - Conference on Learning …, 2020 - proceedings.mlr.press
Adversarially robust classification seeks a classifier that is insensitive to adversarial
perturbations of test patterns. This problem is often formulated via a minimax objective …
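
As a point of reference, the minimax objective alluded to here is usually the adversarial risk (a generic formulation, not the paper's specific calibration results):

$$\min_{f}\; \mathbb{E}_{(x,y)}\Big[\max_{\|\delta\|\le \epsilon} \ell\big(f(x+\delta),\, y\big)\Big],$$

where the inner maximum ranges over perturbations of the test pattern $x$ within an $\epsilon$-ball, and $\ell$ is the 0-1 loss, replaced in practice by a surrogate whose calibration is the question studied.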

Double-weighting for covariate shift adaptation

JI Segovia-Martín, S Mazuelas… - … Conference on Machine …, 2023 - proceedings.mlr.press
Supervised learning is often affected by a covariate shift in which the marginal distributions
of instances (covariates $x$) of training and testing samples $p_\text{tr}(x)$ and $p_\text …
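
The classical single-weighting baseline that double-weighting builds on reweights training losses by the density ratio $w(x) = p_\text{te}(x)/p_\text{tr}(x)$. Below is a minimal sketch of that baseline, assuming numpy arrays `X_tr`, `y_tr`, `X_te` and estimating the ratio with a domain classifier; it is illustrative background, not the double-weighting method of the paper.

```python
# A minimal sketch of classical (single) importance weighting for covariate
# shift, included as background; it is NOT the double-weighting method of the
# paper. The array names X_tr, y_tr, X_te are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(X_tr, X_te):
    """Estimate w(x) = p_te(x) / p_tr(x) on the training covariates."""
    # Train a domain classifier: label 0 = training sample, 1 = test sample.
    X = np.vstack([X_tr, X_te])
    domain = np.concatenate([np.zeros(len(X_tr)), np.ones(len(X_te))])
    clf = LogisticRegression(max_iter=1000).fit(X, domain)
    # Convert P(test | x) into the density ratio, correcting for sample sizes.
    p_test = clf.predict_proba(X_tr)[:, 1]
    return (p_test / (1.0 - p_test)) * (len(X_tr) / len(X_te))

def fit_weighted_classifier(X_tr, y_tr, X_te):
    """Fit a classifier on training data reweighted toward the test marginal."""
    weights = estimate_importance_weights(X_tr, X_te)
    model = LogisticRegression(max_iter=1000)
    model.fit(X_tr, y_tr, sample_weight=weights)
    return model
```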

Minimax classification with 0-1 loss and performance guarantees

S Mazuelas, A Zanoni, A Pérez - Advances in Neural …, 2020 - proceedings.neurips.cc
Supervised classification techniques use training samples to find classification rules with
small expected 0-1 loss. Conventional methods achieve efficient learning and out-of-sample …
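
The minimax formulation underlying this line of work chooses a rule that minimizes the worst-case expected 0-1 loss over an uncertainty set of distributions consistent with the training data (a schematic statement of the approach):

$$\min_{\mathrm{h}}\; \max_{p \in \mathcal{U}}\; \mathbb{E}_{(x,y)\sim p}\big[\ell_{0\text{-}1}(\mathrm{h}(x), y)\big],$$

where $\mathcal{U}$ is typically defined by expectation constraints estimated from the training samples.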

Consistent polyhedral surrogates for top-k classification and variants

A Thilagar, R Frongillo… - International …, 2022 - proceedings.mlr.press
Top-$k$ classification is a generalization of multiclass classification used widely in
information retrieval, image classification, and other extreme classification settings. Several …
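
The evaluation target in this setting is the top-$k$ error. A minimal sketch of how it is computed is given below; the paper's contribution, consistent polyhedral surrogate losses for this target, is not shown.

```python
# A minimal sketch of the top-k error itself (the evaluation target), not the
# surrogate losses studied in the paper.
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of examples whose true label is outside the k top-scoring classes.

    scores: array of shape (n, n_classes) with real-valued class scores.
    labels: array of shape (n,) with integer class labels.
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]       # indices of the k largest scores
    hit = np.any(top_k == labels[:, None], axis=1)   # is the true label among them?
    return 1.0 - hit.mean()

# Example: with k=1 this reduces to the ordinary multiclass error.
scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
print(top_k_error(scores, np.array([1, 2]), k=2))    # 0.5
```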

Generalized maximum entropy for supervised classification

S Mazuelas, Y Shen, A Pérez - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The maximum entropy principle advocates evaluating events' probabilities using a
distribution that maximizes entropy among those that satisfy certain expectations' …
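
In its basic form, the principle selects, among all distributions matching prescribed expectations, the one of maximum entropy (a schematic statement of the classical principle, not the paper's generalized formulation):

$$\max_{p}\; H(p) \quad \text{subject to} \quad \mathbb{E}_{p}\big[\phi(x,y)\big] = \tau,$$

where $\phi$ is a feature map and $\tau$ collects expectation estimates obtained from the training samples.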

Minimax forward and backward learning of evolving tasks with performance guarantees

V Álvarez, S Mazuelas… - Advances in Neural …, 2024 - proceedings.neurips.cc
For a sequence of classification tasks that arrive over time, it is common for the tasks to
evolve, in the sense that consecutive tasks often have higher similarity. The incremental …

Minimax risk classifiers with 0-1 loss

S Mazuelas, M Romero, P Grünwald - Journal of Machine Learning …, 2023 - jmlr.org
Supervised classification techniques use training samples to learn a classification rule with
small expected 0-1 loss (error probability). Conventional methods enable tractable learning …

Consistent structured prediction with max-min margin Markov networks

A Nowak, F Bach, A Rudi - International Conference on …, 2020 - proceedings.mlr.press
Max-margin methods for binary classification such as the support vector machine (SVM)
have been extended to the structured prediction setting under the name of max-margin …
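
For context, the max-margin structured prediction objective that such methods extend can be written with the structured hinge loss (the classical max-margin formulation; the paper's max-min variant modifies the inner maximization to obtain consistency):

$$\min_{w}\; \frac{\lambda}{2}\|w\|^{2} + \frac{1}{n}\sum_{i=1}^{n} \max_{y\in\mathcal{Y}} \Big[\ell(y_i, y) + \big\langle w, \phi(x_i, y)\big\rangle - \big\langle w, \phi(x_i, y_i)\big\rangle\Big],$$

where $\phi$ is a joint feature map over inputs and structured outputs and $\ell$ is a task loss such as the Hamming loss.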