Self-training methods have gained significant attention in recent years due to their effectiveness in leveraging small labeled datasets and large pools of unlabeled observations for …
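A minimal sketch of the generic self-training loop such methods share: train on the labeled set, pseudo-label the unlabeled points the model is confident about, and retrain. The scikit-learn base model, the confidence threshold, and the round limit are illustrative assumptions, not any single paper's recipe.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    # Start from the labeled data; grow it with confident pseudo-labels.
    X, y, pool = X_lab, y_lab, X_unlab
    model = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        model.fit(X, y)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break  # nothing left that the model is sure about
        pseudo = model.classes_[proba[confident].argmax(axis=1)]
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, pseudo])
        pool = pool[~confident]
    return model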
H Liu, J Wang, M Long - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift, which are empirically effective but theoretically …
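One common proxy for the domain shift these representation-alignment approaches try to narrow is the maximum mean discrepancy (MMD) between source and target features; MMD is our choice of illustration here, not something the snippet names. A small NumPy sketch with an RBF kernel (bandwidth is an assumption):

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then RBF kernel values.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimate of squared MMD between source and target samples.
    k_ss = rbf_kernel(Xs, Xs, gamma).mean()
    k_tt = rbf_kernel(Xt, Xt, gamma).mean()
    k_st = rbf_kernel(Xs, Xt, gamma).mean()
    return k_ss + k_tt - 2 * k_st

Domain-invariant representation learning can be read as driving an estimate like this toward zero while keeping source classification accuracy high.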
As larger deep learning models are hard to interpret, there has been a recent focus on generating explanations of these black-box models. In contrast, we may have a priori …
Prior theoretical and empirical works have established that semi-supervised learning algorithms can leverage unlabeled data to improve upon the labeled sample complexity …
Large neural networks trained in the overparameterized regime are able to fit noisy labels to zero training error. Recent work by Nakkiran and Bansal empirically observed that such networks …
Y Kou, Z Chen, Y Cao, Q Gu - International Conference on Learning Representations, 2023 - par.nsf.gov
Semi-supervised learning is a popular machine learning paradigm that utilizes a large amount of unlabeled data as well as a small amount of labeled data to facilitate learning …
Strong student models can learn from weaker teachers: when trained on the predictions of a weaker model, a strong pretrained student can learn to correct the weak model's errors and …
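A toy scaffold for the weak-to-strong protocol the snippet describes: a weak teacher trained on a few labels annotates a larger pool, and a stronger student is fit to those noisy annotations. The model choices and split sizes below are illustrative assumptions; whether the student actually surpasses the teacher depends on its inductive biases and pretraining, which this sketch does not guarantee.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_teach, X_rest, y_teach, y_rest = train_test_split(X, y, train_size=300, random_state=0)
X_pool, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=1000, random_state=0)

weak = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_teach, y_teach)
strong = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    X_pool, weak.predict(X_pool))  # student sees only the teacher's labels

print("teacher accuracy:", weak.score(X_test, y_test))
print("student accuracy:", strong.score(X_test, y_test))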
We investigate the generalization properties of a self-training algorithm with halfspaces. The approach learns a list of halfspaces iteratively from labeled and unlabeled training data, in …
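A sketch of the iterative scheme this snippet outlines, read as a decision list of halfspaces: each round fits a linear separator, pseudo-labels the unlabeled points lying far from it, and continues on the rest. The hinge-loss learner, the margin threshold theta, and the binary-label restriction are our assumptions, not the paper's exact algorithm.

import numpy as np
from sklearn.linear_model import SGDClassifier

def fit_halfspace_list(X_lab, y_lab, X_unlab, theta=1.0, rounds=5):
    halfspaces, X, y, pool = [], X_lab, y_lab, X_unlab
    for _ in range(rounds):
        h = SGDClassifier(loss="hinge", max_iter=2000, tol=1e-4).fit(X, y)
        halfspaces.append(h)
        if len(pool) == 0:
            break
        margins = np.abs(h.decision_function(pool))
        keep = margins >= theta  # pseudo-label only high-margin points
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, h.predict(pool[keep])])
        pool = pool[~keep]
    return halfspaces

def predict_list(halfspaces, X, theta=1.0):
    # First halfspace confident on a point decides it; fall back to last.
    out = halfspaces[-1].predict(X)
    decided = np.zeros(len(X), dtype=bool)
    for h in halfspaces[:-1]:
        conf = (np.abs(h.decision_function(X)) >= theta) & ~decided
        out[conf] = h.predict(X[conf])
        decided |= conf
    return out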
V Feofanov, E Devijver, MR Amini - Journal of Machine Learning Research, 2024 - jmlr.org
In this paper, we propose a probabilistic framework for analyzing a multi-class majority vote classifier in the case where training data is partially labeled. First, we derive a multi-class …
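For reference, the voting rule such an analysis studies, in a minimal form: each base classifier casts one vote per example, and the empirical vote distribution doubles as a confidence score. This only illustrates the majority vote itself, not the paper's probabilistic framework or bounds; the names are ours.

import numpy as np

def majority_vote(classifiers, X, n_classes):
    # Accumulate one vote per classifier for each example; assumes
    # integer class labels 0..n_classes-1 (our simplification).
    votes = np.zeros((len(X), n_classes))
    for clf in classifiers:
        votes[np.arange(len(X)), clf.predict(X)] += 1
    vote_dist = votes / len(classifiers)  # per-class vote shares
    return vote_dist.argmax(axis=1), vote_dist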