Federated multi-task learning under a mixture of distributions

O Marfoq, G Neglia, A Bellet… - Advances in Neural …, 2021 - proceedings.neurips.cc
The increasing size of data generated by smartphones and IoT devices motivated the
development of Federated Learning (FL), a framework for on-device collaborative training of …

Dash: Semi-supervised learning with dynamic thresholding

Y Xu, L Shang, J Ye, Q Qian, YF Li… - International …, 2021 - proceedings.mlr.press
While semi-supervised learning (SSL) has received tremendous attention in many machine
learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either …

[HTML] Self-training: A survey

MR Amini, V Feofanov, L Pauletto, L Hadjadj… - Neurocomputing, 2025 - Elsevier
Self-training methods have gained significant attention in recent years due to their
effectiveness in leveraging small labeled datasets and large unlabeled observations for …
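Self-training, as surveyed above, iteratively pseudo-labels confident unlabeled points and retrains on the enlarged labeled set. A minimal sketch of this generic loop (not the survey's specific algorithms) using scikit-learn; the confidence threshold and round count are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Generic self-training: pseudo-label confident unlabeled points,
    add them to the labeled set, and retrain."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    model = LogisticRegression().fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold          # only trust confident predictions
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, model.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
        model = LogisticRegression().fit(X, y)
    return model

# Toy data: two well-separated Gaussian blobs, only 10 labeled points.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(-2, 0.5, (5, 2)), rng.normal(2, 0.5, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(-2, 0.5, (50, 2)),
                     rng.normal(2, 0.5, (50, 2))])
model = self_train(X_lab, y_lab, X_unlab)
```

The confidence threshold is the key knob: too low and wrong pseudo-labels pollute the training set (the failure mode several entries below discuss), too high and no unlabeled data is ever used.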

Not too little, not too much: a theoretical analysis of graph (over) smoothing

N Keriven - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
We analyze graph smoothing with mean aggregation, where each node successively
receives the average of the features of its neighbors. Indeed, it has quickly been observed …
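The mean-aggregation process this entry analyzes, where each node repeatedly replaces its features with the average over its neighborhood, can be sketched in a few lines. This is a generic illustration (graph, features, and step count are made up, not the paper's setup); running it for many steps shows the over-smoothing the title refers to:

```python
import numpy as np

def mean_smooth(adj, features, steps):
    """Apply `steps` rounds of mean aggregation: each node's feature
    vector becomes the average of its neighbors' feature vectors."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / deg                      # row-stochastic averaging operator
    X = features
    for _ in range(steps):
        X = P @ X
    return X

# Path graph on 4 nodes, with self-loops so each average includes the node.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
X0 = np.array([[0.0], [1.0], [2.0], [3.0]])
X_smooth = mean_smooth(adj, X0, steps=10)
# After many steps the node features have nearly converged to a common
# value: the initial spread of 3.0 has collapsed (over-smoothing).
```

Each aggregation step contracts the feature spread toward a graph-wide average, which is why the paper's "not too little, not too much" trade-off arises: a few steps denoise, too many erase all node-level signal.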

Understanding self-training for gradual domain adaptation

A Kumar, T Ma, P Liang - International conference on …, 2020 - proceedings.mlr.press
Machine learning systems must adapt to data distributions that evolve over time, in
applications ranging from sensor networks and self-driving car perception modules to brain …

Flatmatch: Bridging labeled data and unlabeled data with cross-sharpness for semi-supervised learning

Z Huang, L Shen, J Yu, B Han… - Advances in Neural …, 2023 - proceedings.neurips.cc
Semi-Supervised Learning (SSL) has been an effective way to leverage abundant
unlabeled data with extremely scarce labeled data. However, most SSL methods are …

Towards making unlabeled data never hurt

YF Li, ZH Zhou - IEEE transactions on pattern analysis and …, 2014 - ieeexplore.ieee.org
It is usually expected that learning performance can be improved by exploiting unlabeled
data, particularly when the number of labeled data is limited. However, it has been reported …

[PDF] Semi-supervised learning methods

J Liu, Y Liu, X Luo - Chinese Journal of Computers, 2015 - researchgate.net
Semi-supervised learning studies how to improve learning performance by exploiting labeled
samples and unlabeled examples simultaneously, and has become a research hotspot in machine
learning in recent years. In view of the theoretical significance and practical value of semi-supervised learning …

[PDF] Semi-supervised novelty detection

G Blanchard, G Lee, C Scott - The Journal of Machine Learning Research, 2010 - jmlr.org
A common setting for novelty detection assumes that labeled examples from the nominal
class are available, but that labeled examples of novelties are unavailable. The standard …

Unlabeled data: Now it helps, now it doesn't

A Singh, R Nowak, J Zhu - Advances in neural information …, 2008 - proceedings.neurips.cc
Empirical evidence shows that in favorable situations semi-supervised learning (SSL)
algorithms can capitalize on the abundance of unlabeled training data to improve the …