The need for organ transplants has risen, but the number of available organ donations for transplants has stagnated worldwide. Regenerative medicine has been developed to make …
Recent research in semi-supervised learning (SSL) is largely dominated by consistency-regularization-based methods, which achieve strong performance. However, they heavily …
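The consistency-regularization idea this snippet refers to can be sketched minimally: penalize disagreement between a model's predictions on two augmented views of the same unlabeled input. The mean-squared-error form below is one common choice, not the specific loss of any single cited paper; the numbers are illustrative.

```python
import numpy as np

def consistency_loss(p_weak: np.ndarray, p_strong: np.ndarray) -> float:
    """Mean squared error between class-probability predictions on two
    augmented views of the same unlabeled batch (a common consistency loss)."""
    return float(np.mean((p_weak - p_strong) ** 2))

# Toy batch: 2 unlabeled examples, 3 classes (made-up probabilities).
p_weak = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
p_strong = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.7, 0.1]])
loss = consistency_loss(p_weak, p_strong)
```

In training, this unsupervised term is typically added to the usual supervised cross-entropy on the labeled examples, weighted by a ramp-up coefficient.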
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet pre-training is commonly used to initialize the backbones of object detection and …
Semi-supervised learning is the branch of machine learning concerned with using labelled as well as unlabelled data to perform certain learning tasks. Conceptually situated between …
Q Xie, MT Luong, E Hovy… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled …
Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative …
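The iterative self-training loop mentioned here usually alternates between training on the current labeled set and promoting confident predictions on unlabeled data to pseudo-labels. A minimal sketch of one such round, with a hypothetical helper name and an illustrative 0.9 confidence threshold (neither taken from the cited papers):

```python
import numpy as np

def pseudo_label(probs: np.ndarray, threshold: float = 0.9):
    """One round of confidence-based pseudo-labeling: return the indices of
    unlabeled examples whose max class probability meets the threshold,
    together with the predicted class for each selected example."""
    conf = probs.max(axis=1)
    idx = np.where(conf >= threshold)[0]
    return idx, probs[idx].argmax(axis=1)

# Toy predictions on 3 unlabeled examples, 3 classes (made-up numbers).
probs = np.array([[0.95, 0.03, 0.02],   # confident -> kept as class 0
                  [0.50, 0.30, 0.20],   # uncertain -> dropped
                  [0.05, 0.92, 0.03]])  # confident -> kept as class 1
idx, labels = pseudo_label(probs, threshold=0.9)
```

The selected examples would then be merged into the training set and the model retrained, repeating until the pool of confident predictions stops growing.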
J Li, C Xiong, SCH Hoi - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Semi-supervised learning has been an effective paradigm for leveraging unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a new semi-supervised learning …
Semi-supervised learning, i.e., jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human supervision. In the context of …
State-of-the-art deep learning models are often trained with a large amount of costly labeled training data. However, requiring exhaustive manual annotations may degrade the model's …