Collaboration is crucial for reaching collective goals. However, its effectiveness is often undermined by the strategic behavior of individual agents—a fact that is captured by …
The goal of multi-objective optimization (MOO) is to learn under multiple, potentially conflicting, objectives. One widely used technique to tackle MOO is through linear …
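The linear scalarization mentioned in this snippet can be sketched in a few lines: combine the objectives into one weighted sum and sweep the weight to trace out trade-offs. The two objectives below are illustrative toys, not from the paper.

```python
import numpy as np

# Two hypothetical conflicting objectives over a scalar decision x:
# f1 favors x near 0, f2 favors x near 1 (illustrative only).
def f1(x):
    return x ** 2

def f2(x):
    return (x - 1) ** 2

def scalarized(x, w):
    """Weighted-sum (linear) scalarization of the two objectives."""
    return w * f1(x) + (1 - w) * f2(x)

# Sweeping the weight w recovers different Pareto-optimal trade-offs.
xs = np.linspace(0.0, 1.0, 1001)
for w in (0.25, 0.5, 0.75):
    x_star = xs[np.argmin(scalarized(xs, w))]
    print(f"w={w}: minimizer x* = {x_star:.2f}")
```

For these quadratics the minimizer of the scalarized objective is x* = 1 − w, so varying w moves smoothly between the two objectives' individual optima.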
K Zhan, X Xiong, Z Guo, T Cai, M Liu - arXiv preprint arXiv:2407.20073, 2024 - arxiv.org
Despite recent advances in transfer learning with multiple source data sets, developments are still lacking for mixture target populations that could be approximated through a …
Multi-distribution or collaborative learning involves learning a single predictor that works well across multiple data distributions, using samples from each during training. Recent …
Modern challenges of robustness, fairness, and decision-making in machine learning have led to the formulation of multi-distribution learning (MDL) frameworks in which a predictor is …
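The MDL objective described in the two snippets above is typically a minimax one: choose a single predictor minimizing its worst-case loss across the data distributions. A minimal sketch, with synthetic distributions and a one-parameter predictor that are entirely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical data distributions: same covariates, different slopes.
slopes = [0.8, 1.0, 1.4]
datasets = []
for s in slopes:
    x = rng.uniform(-1, 1, 200)
    y = s * x + 0.05 * rng.normal(size=200)
    datasets.append((x, y))

def worst_case_loss(theta):
    """MDL objective: maximum mean squared error across all distributions."""
    return max(np.mean((y - theta * x) ** 2) for x, y in datasets)

# Minimizing the worst-case loss balances all three distributions,
# rather than fitting whichever one dominates a pooled sample.
thetas = np.linspace(0.5, 1.7, 601)
theta_mdl = thetas[np.argmin([worst_case_loss(t) for t in thetas])]
print(f"worst-case-optimal slope: {theta_mdl:.2f}")
```

The minimax solution lands roughly midway between the extreme slopes (near 1.1 here), equalizing the losses on the hardest distributions, whereas least squares on the pooled data would track the average slope.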
We introduce a fine-grained framework for uncertainty quantification of predictive models under distributional shifts. This framework distinguishes the shift in covariate distributions …
J Wang, Z Ren, R Zhan, Z Zhou - arXiv preprint arXiv:2412.14297, 2024 - arxiv.org
Distributionally robust policy learning aims to find a policy that performs well under the worst-case distributional shift, and yet most existing methods for robust policy learning consider …
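The worst-case criterion in this snippet can be illustrated with a toy maximin computation: evaluate a stochastic policy under the least favorable environment in a finite ambiguity set and pick the policy maximizing that value. This is a sketch of the general idea only; the actual methods surveyed use richer ambiguity sets, and the reward table below is invented.

```python
import numpy as np

# Hypothetical expected rewards: reward_table[env][action] gives the
# expected reward of each of two actions in three candidate environments.
reward_table = np.array([
    [1.0, 0.0],
    [0.4, 0.6],
    [0.2, 0.9],
])

def worst_case_value(action_probs):
    """Value of a stochastic policy under the least favorable environment."""
    values = reward_table @ action_probs  # expected reward per environment
    return values.min()

# Grid search over P(action 0) for the distributionally robust policy.
ps = np.linspace(0, 1, 1001)
robust_p = ps[np.argmax([worst_case_value(np.array([p, 1 - p])) for p in ps])]
print(f"robust P(action 0) = {robust_p:.2f}")
```

Note that the robust policy is stochastic: either deterministic action can be made arbitrarily bad by some environment in the set, so hedging between them maximizes the guaranteed reward.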
E Chen, X Chen, W Jing - arXiv preprint arXiv:2404.15209, 2024 - arxiv.org
In data-driven decision-making in marketing, healthcare, and education, it is desirable to utilize a large amount of data from existing ventures to navigate high-dimensional feature …
Multi-objective optimization (MOO) in deep learning aims to simultaneously optimize multiple conflicting objectives, a challenge frequently encountered in areas like multi-task …