Adapting to online label shift with provable guarantees

Y Bai, YJ Zhang, P Zhao… - Advances in Neural …, 2022 - proceedings.neurips.cc
The standard supervised learning paradigm works effectively when training data shares the
same distribution as the upcoming testing samples. However, this stationary assumption is …

Online adaptation to label distribution shift

R Wu, C Guo, Y Su… - Advances in Neural …, 2021 - proceedings.neurips.cc
Machine learning models often encounter distribution shifts when deployed in the
real world. In this paper, we focus on adaptation to label distribution shift in the online …

Online label shift: Optimal dynamic regret meets practical algorithms

D Baby, S Garg, TC Yen… - Advances in …, 2024 - proceedings.neurips.cc
This paper focuses on supervised and unsupervised online label shift, where the class
marginals $Q(y)$ vary but the class-conditionals $Q(x|y)$ remain invariant. In the …

Handling New Class in Online Label Shift

YY Qian, Y Bai, ZY Zhang, P Zhao… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
In many real-world applications, data are continuously accumulated within open
environments. For instance, in disease diagnosis, the prevalence of diseases can vary …

Robust active label correction

J Kremer, F Sha, C Igel - International conference on artificial …, 2018 - proceedings.mlr.press
Active label correction addresses the problem of learning from input data for which noisy
labels are available (eg, from imprecise measurements or crowd-sourcing) and each true …

Coping with label shift via distributionally robust optimisation

J Zhang, A Menon, A Veit, S Bhojanapalli… - arXiv preprint arXiv …, 2020 - arxiv.org
The label shift problem refers to the supervised learning setting where the train and test
label distributions do not match. Existing work addressing label shift usually assumes …

Actively testing your model while it learns: realizing label-efficient learning in practice

D Yu, W Shi, Q Yu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
In active learning (AL), we focus on reducing the data annotation cost from the model
training perspective. However, "testing", which often refers to the model evaluation process …

Incorporating unlabeled data into distributionally robust learning

C Frogner, S Claici, E Chien, J Solomon - arXiv preprint arXiv:1912.07729, 2019 - arxiv.org
We study a robust alternative to empirical risk minimization called distributionally robust
learning (DRL), in which one learns to perform against an adversary who can choose the …

SSR: An efficient and robust framework for learning with unknown label noise

C Feng, G Tzimiropoulos, I Patras - arXiv preprint arXiv:2111.11288, 2021 - arxiv.org
Despite the large progress in supervised learning with neural networks, there are significant
challenges in obtaining high-quality, large-scale and accurately labelled datasets. In such a …

Progressive stochastic learning for noisy labels

B Han, IW Tsang, L Chen, PY Celina… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
Large-scale learning problems require a plethora of labels that can be efficiently collected
from crowdsourcing services at low cost. However, labels annotated by crowdsourced …