The sample complexity of multi-distribution learning

B Peng - The Thirty Seventh Annual Conference on Learning …, 2024 - proceedings.mlr.press
Multi-distribution learning generalizes classic PAC learning to handle data coming from
multiple distributions. Given a set of $ k $ data distributions and a hypothesis class of VC …
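The multi-distribution objective described in the snippet, evaluating a single predictor by its worst-case error over the $k$ distributions, can be sketched as follows. This is not code from the paper; the data, predictor, and `worst_case_error` helper are all hypothetical illustrations.

```python
import numpy as np

def worst_case_error(predict, distributions):
    """Score a single predictor by its maximum (worst-case) 0-1 error
    across all k data distributions, the multi-distribution criterion."""
    return max(np.mean(predict(X) != y) for X, y in distributions)

# Hypothetical example: a constant predictor evaluated on k = 2 distributions.
rng = np.random.default_rng(0)
dists = [(rng.normal(size=(100, 3)), np.zeros(100, dtype=int)),
         (rng.normal(size=(100, 3)), np.ones(100, dtype=int))]
predictor = lambda X: np.zeros(len(X), dtype=int)

# Perfect on the first distribution, wrong everywhere on the second,
# so the worst-case error is 1.0.
print(worst_case_error(predictor, dists))  # 1.0
```

A learner that only minimized the average error over the two distributions could rate this predictor as decent; the max criterion exposes its failure on the second distribution.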

Platforms for Efficient and Incentive-Aware Collaboration

N Haghtalab, M Qiao, K Yang - Proceedings of the 2025 Annual ACM-SIAM …, 2025 - SIAM
Collaboration is crucial for reaching collective goals. However, its potential for effectiveness
is often undermined by the strategic behavior of individual agents—a fact that is captured by …

Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Optimization

M Liu, X Zhang, C Xie, K Donahue, H Zhao - arXiv preprint arXiv …, 2024 - arxiv.org
The goal of multi-objective optimization (MOO) is to learn under multiple, potentially
conflicting, objectives. One widely used technique to tackle MOO is through linear …
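Tchebycheff scalarization, named in the title as the alternative to the linear scalarization mentioned in the snippet, replaces a weighted sum of objectives with a weighted max of deviations from an ideal point. A minimal sketch, with hypothetical objectives and a grid search standing in for the paper's online mirror descent:

```python
import numpy as np

def tchebycheff(fs, x, weights, ideal):
    """Tchebycheff scalarization: max_i w_i * (f_i(x) - z_i),
    the weighted worst deviation from the ideal point z."""
    vals = np.array([f(x) for f in fs])
    return np.max(weights * (vals - ideal))

# Two conflicting objectives on a scalar decision variable x:
# f1 prefers x = 1, f2 prefers x = -1.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
fs, w, z = [f1, f2], np.array([0.5, 0.5]), np.array([0.0, 0.0])

# Minimizing the scalarized value over a grid balances both objectives.
grid = np.linspace(-2.0, 2.0, 401)
best = grid[np.argmin([tchebycheff(fs, x, w, z) for x in grid])]
print(best)  # 0.0, the balanced trade-off between the two objectives
```

Unlike a linear scalarization, minimizing the Tchebycheff form can reach Pareto-optimal points even on non-convex fronts, which is one reason it is a standard MOO tool.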

Transfer Learning Targeting Mixed Population: A Distributional Robust Perspective

K Zhan, X Xiong, Z Guo, T Cai, M Liu - arXiv preprint arXiv:2407.20073, 2024 - arxiv.org
Despite recent advances in transfer learning with multiple source data sets, developments
are still lacking for mixture target populations that could be approximated through a …

Derandomizing Multi-Distribution Learning

KG Larsen, O Montasser, N Zhivotovskiy - arXiv preprint arXiv:2409.17567, 2024 - arxiv.org
Multi-distribution or collaborative learning involves learning a single predictor that works
well across multiple data distributions, using samples from each during training. Recent …

On Calibration in Multi-Distribution Learning

R Verma, V Fischer, E Nalisnick - arXiv preprint arXiv:2412.14142, 2024 - arxiv.org
Modern challenges of robustness, fairness, and decision-making in machine learning have
led to the formulation of multi-distribution learning (MDL) frameworks in which a predictor is …

Not all distributional shifts are equal: Fine-grained robust conformal inference

J Ai, Z Ren - arXiv preprint arXiv:2402.13042, 2024 - arxiv.org
We introduce a fine-grained framework for uncertainty quantification of predictive models
under distributional shifts. This framework distinguishes the shift in covariate distributions …

Distributionally Robust Policy Learning under Concept Drifts

J Wang, Z Ren, R Zhan, Z Zhou - arXiv preprint arXiv:2412.14297, 2024 - arxiv.org
Distributionally robust policy learning aims to find a policy that performs well under the worst-
case distributional shift, and yet most existing methods for robust policy learning consider …
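The worst-case criterion in this snippet can be illustrated with a toy evaluation: score each policy by its minimum expected reward over a set of candidate context distributions (a stand-in for the drifted distributions). The policies, rewards, and `robust_value` helper below are hypothetical, not from the paper.

```python
import numpy as np

def robust_value(policy_rewards, weight_set):
    """Worst-case expected reward of a policy over an uncertainty set of
    context distributions (each element of weight_set is one distribution)."""
    return min(w @ policy_rewards for w in weight_set)

# Hypothetical per-context rewards for two policies, and two candidate
# context distributions representing a concept drift.
r_a = np.array([1.0, 0.0])   # policy A: excellent on context 0, useless on 1
r_b = np.array([0.6, 0.6])   # policy B: moderate everywhere
shifts = [np.array([0.8, 0.2]), np.array([0.2, 0.8])]

# Robust values: A drops to 0.2 under the adverse shift, B stays at 0.6,
# so the distributionally robust criterion selects policy B.
print(robust_value(r_a, shifts), robust_value(r_b, shifts))
```

Average-case evaluation under the first distribution alone would favor policy A (0.8 vs 0.6); the min over the uncertainty set reverses that ranking, which is the point of the robust formulation.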

Data-Driven Knowledge Transfer in Batch Learning

E Chen, X Chen, W Jing - arXiv preprint arXiv:2404.15209, 2024 - arxiv.org
In data-driven decision-making in marketing, healthcare, and education, it is desirable to
utilize a large amount of data from existing ventures to navigate high-dimensional feature …

Gradient-Based Multi-Objective Deep Learning: Algorithms, Theories, Applications, and Beyond

W Chen, X Zhang, B Lin, X Lin, H Zhao… - arXiv preprint arXiv …, 2025 - arxiv.org
Multi-objective optimization (MOO) in deep learning aims to simultaneously optimize
multiple conflicting objectives, a challenge frequently encountered in areas like multi-task …