BYOL works even without batch statistics

PH Richemond, JB Grill, F Altché, C Tallec… - arXiv preprint arXiv …, 2020 - arxiv.org
Bootstrap Your Own Latent (BYOL) is a self-supervised learning approach for image
representation. From an augmented view of an image, BYOL trains an online network to …

Exploring simple Siamese representation learning

X Chen, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Siamese networks have become a common structure in various recent models for
unsupervised visual representation learning. These models maximize the similarity between …

Bootstrap your own latent: A new approach to self-supervised learning

JB Grill, F Strub, F Altché, C Tallec… - Advances in neural …, 2020 - proceedings.neurips.cc
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-
supervised image representation learning. BYOL relies on two neural networks, referred to …

Beyond supervised vs. unsupervised: Representative benchmarking and analysis of image representation learning

M Gwilliam, A Shrivastava - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
By leveraging contrastive learning, clustering, and other pretext tasks, unsupervised
methods for learning image representations have reached impressive results on standard …

SEED: Self-supervised distillation for visual representation

Z Fang, J Wang, L Wang, L Zhang, Y Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper is concerned with self-supervised learning for small models. The problem is
motivated by our empirical studies that while the widely used contrastive self-supervised …

MixCo: Mix-up contrastive learning for visual representation

S Kim, G Lee, S Bae, SY Yun - arXiv preprint arXiv:2010.06300, 2020 - researchgate.net
Contrastive learning has shown remarkable results in recent self-supervised approaches for
visual representation. By learning to contrast positive pairs' representation from the …

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

N Tomasev, I Bica, B McWilliams, L Buesing… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite recent progress made by self-supervised methods in representation learning with
residual networks, they still underperform supervised learning on the ImageNet classification …

Efficient self-supervised vision transformers for representation learning

C Li, J Yang, P Zhang, M Gao, B Xiao, X Dai… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper investigates two techniques for developing efficient self-supervised vision
transformers (EsViT) for visual representation learning. First, we show through a …

Masked Siamese networks for label-efficient learning

M Assran, M Caron, I Misra, P Bojanowski… - … on Computer Vision, 2022 - Springer
We propose Masked Siamese Networks (MSN), a self-supervised learning
framework for learning image representations. Our approach matches the representation of …

DreamTeacher: Pretraining image backbones with deep generative models

D Li, H Ling, A Kar, D Acuna, SW Kim… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we introduce a self-supervised feature representation learning framework
DreamTeacher that utilizes generative networks for pre-training downstream image …