We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own …
Z Wen, Y Li - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
The surprising discovery of the BYOL method shows that negative samples can be replaced by adding a prediction head to the network. It is mysterious why, even when there exist …
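The BYOL-style setup this snippet refers to can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: the linear encoders, the `W_pred` name, and the EMA rate `tau` are all assumptions made here for clarity. The key structural points are that there are no negative samples, the target branch receives no gradient, and the target weights move only by exponential moving average.

```python
import numpy as np

# Toy BYOL-style self-predictive objective (illustrative sketch, not the
# paper's code): an online encoder plus a prediction head tries to match a
# slowly-moving target encoder. No negative samples are used.
rng = np.random.default_rng(0)

W_online = rng.normal(size=(8, 4))   # online encoder (linear, for illustration)
W_target = W_online.copy()           # target encoder, updated only by EMA
W_pred = np.eye(4)                   # prediction head on the online branch

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def byol_loss(x1, x2):
    """Negative cosine similarity between predicted online and target features."""
    p = l2_normalize(x1 @ W_online @ W_pred)  # online view through the predictor
    z = l2_normalize(x2 @ W_target)           # target view (treated as constant)
    return -(p * z).sum(axis=-1).mean()

x = rng.normal(size=(16, 8))
loss = byol_loss(x, x + 0.01 * rng.normal(size=x.shape))  # two "augmented views"

# EMA update of the target network: the only way the target ever changes.
tau = 0.99
W_target = tau * W_target + (1 - tau) * W_online
```

Because the two views here are nearly identical and the target starts as a copy of the online network, the loss starts near its minimum of -1; in training, gradients would flow only through `W_online` and `W_pred`.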
Contrastive learning has achieved state-of-the-art performance in various self-supervised learning tasks and even outperforms its supervised counterpart. Despite its empirical …
Many recent advances in unsupervised feature learning are based on designing features that are invariant under semantic data augmentations. A common way to do this is …
The vast majority of work in self-supervised learning has focused on assessing recovered features by a chosen set of downstream tasks. While there are several commonly used …
Learning from large amounts of unsupervised data and a small amount of supervision is an important open problem in computer vision. We propose a new semi-supervised learning …
In this work, we investigate the implicit regularization induced by teacher-student learning dynamics in self-distillation. To isolate its effect, we describe a simple experiment where we …
Self-predictive unsupervised learning methods such as BYOL or SimSiam have shown impressive results and, counter-intuitively, do not collapse to trivial representations. In this …
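The "collapse to trivial representations" this snippet mentions has a simple operational signature: a collapsed encoder maps every input to (nearly) the same vector. A minimal numpy sketch of that diagnostic, with synthetic feature matrices standing in for encoder outputs (the data and threshold here are illustrative assumptions):

```python
import numpy as np

# Collapse diagnostic (illustrative sketch): if an encoder outputs the same
# vector for every input, the per-dimension variance of its features is zero.
rng = np.random.default_rng(1)

healthy = rng.normal(size=(256, 32))                      # diverse representations
collapsed = np.tile(rng.normal(size=(1, 32)), (256, 1))   # constant output

def feature_std(z):
    """Mean per-dimension standard deviation of L2-normalized features."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z.std(axis=0).mean()

healthy_std = feature_std(healthy)      # roughly 1/sqrt(dim) for random features
collapsed_std = feature_std(collapsed)  # exactly zero: full collapse
```

Mechanisms such as the stop-gradient and the EMA target discussed in these papers are, empirically, what keeps trained encoders on the "healthy" side of this check.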
JH Lee, D Yoon, BM Ji, K Kim, S Hwang - arXiv preprint arXiv:2304.03456, 2023 - arxiv.org
Linear probing (LP) (and $k$-NN) on the upstream dataset with labels (e.g., ImageNet) and transfer learning (TL) to various downstream datasets are commonly employed to evaluate …
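The $k$-NN evaluation protocol this snippet mentions trains no weights at all: frozen features of labeled "upstream" examples vote on the label of each query by nearest-neighbour majority. A self-contained numpy sketch on synthetic, well-separated clusters standing in for encoder outputs (the data, dimensions, and $k$ are illustrative assumptions):

```python
import numpy as np

# k-NN evaluation on frozen features (illustrative sketch): neighbours in
# representation space vote on the label; no parameters are trained.
rng = np.random.default_rng(2)

# Two synthetic feature clusters standing in for a frozen encoder's outputs.
train_feats = np.concatenate([rng.normal(0, 1, (50, 16)),
                              rng.normal(5, 1, (50, 16))])
train_labels = np.array([0] * 50 + [1] * 50)
test_feats = np.concatenate([rng.normal(0, 1, (10, 16)),
                             rng.normal(5, 1, (10, 16))])
test_labels = np.array([0] * 10 + [1] * 10)

def knn_predict(query, feats, labels, k=5):
    """Majority vote among the k nearest training features."""
    dists = np.linalg.norm(feats - query, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

preds = np.array([knn_predict(q, train_feats, train_labels) for q in test_feats])
accuracy = (preds == test_labels).mean()
```

A linear probe (LP) differs only in fitting a single linear classifier on the same frozen features; both protocols measure the representation, not the classifier.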