X Chen, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Siamese networks have become a common structure in various recent models for unsupervised visual representation learning. These models maximize the similarity between …
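The snippet describes the Siamese setup of SimSiam-style methods: two augmented views of the same image are encoded and pulled together, with a stop-gradient on one branch preventing collapse. Below is a minimal PyTorch sketch of that objective; the toy encoder, layer sizes, and input shape are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a Siamese similarity objective in the style of SimSiam
# (Chen & He, 2021). The encoder and dimensions are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, dim=2048, pred_dim=512):
        super().__init__()
        # Toy encoder standing in for a backbone + projection MLP.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        # Prediction head applied to one branch only.
        self.predictor = nn.Sequential(
            nn.Linear(dim, pred_dim), nn.ReLU(inplace=True), nn.Linear(pred_dim, dim)
        )

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Stop-gradient on the target branch is what prevents trivial collapse.
        return p1, p2, z1.detach(), z2.detach()

def similarity_loss(p, z):
    # Negative cosine similarity between prediction and detached projection.
    return -F.cosine_similarity(p, z, dim=-1).mean()

model = SiameseNet()
x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)  # two augmented views
p1, p2, z1, z2 = model(x1, x2)
loss = 0.5 * similarity_loss(p1, z2) + 0.5 * similarity_loss(p2, z1)
```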
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to …
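BYOL's two networks are an online network trained by gradient descent and a target network updated as an exponential moving average (EMA) of the online weights. A minimal sketch of that interaction, assuming PyTorch; the momentum value and toy encoders are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of BYOL's online/target interaction.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

online = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))
target = copy.deepcopy(online)          # target starts as a copy of online
for p in target.parameters():
    p.requires_grad = False             # target is never updated by gradients
predictor = nn.Linear(256, 256)         # prediction head on the online branch only

@torch.no_grad()
def ema_update(online_net, target_net, tau=0.996):
    # Target weights track an exponential moving average of online weights.
    for po, pt in zip(online_net.parameters(), target_net.parameters()):
        pt.mul_(tau).add_(po, alpha=1.0 - tau)

def byol_loss(x1, x2):
    # Online branch predicts the target branch's output for the other view.
    q = F.normalize(predictor(online(x1)), dim=-1)
    with torch.no_grad():
        z = F.normalize(target(x2), dim=-1)
    return (2 - 2 * (q * z).sum(dim=-1)).mean()  # equals 2 - 2*cosine similarity

x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
loss = byol_loss(x1, x2) + byol_loss(x2, x1)  # symmetrized over the two views
loss.backward()
ema_update(online, target)
```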
By leveraging contrastive learning, clustering, and other pretext tasks, unsupervised methods for learning image representations have reached impressive results on standard …
This paper is concerned with self-supervised learning for small models. The problem is motivated by our empirical studies showing that, while the widely used contrastive self-supervised …
S Kim, G Lee, S Bae, SY Yun - arXiv preprint arXiv:2010.06300, 2020 - researchgate.net
Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation. By learning to contrast positive pairs' representations from the …
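The contrastive objective these snippets refer to is typically an InfoNCE-style loss: each positive pair is pulled together while all other samples in the batch serve as negatives. A minimal sketch assuming PyTorch; the temperature and embedding sizes are illustrative choices.

```python
# Minimal InfoNCE-style contrastive loss over a batch of paired views.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views; row i of z1 and z2 form a positive pair."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature     # (N, N) cosine-similarity matrix
    labels = torch.arange(z1.size(0))      # positives sit on the diagonal
    # Each sample is contrasted against all other samples in the batch (negatives).
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
loss = 0.5 * (info_nce(z1, z2) + info_nce(z2, z1))
```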
Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification …
This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a …
We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of …
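MSN matches the prototype assignment of a randomly masked view to that of the unmasked view of the same image. A rough sketch of that matching step, assuming PyTorch; the prototype count, temperatures, and stand-in encoder are assumptions for illustration, not the paper's configuration.

```python
# Rough sketch of a Masked-Siamese-style prototype-matching step.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_prototypes, dim = 32, 64
prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
encoder = nn.Linear(3 * 16 * 16, dim)  # stand-in for a ViT over visible patches

def assign(z, temp):
    # Soft assignment of embeddings to prototypes via cosine similarity.
    z = F.normalize(z, dim=-1)
    sim = z @ F.normalize(prototypes, dim=-1).t()
    return F.softmax(sim / temp, dim=-1)

x_masked = torch.randn(8, 3 * 16 * 16)  # stand-in input for the masked view
x_full = torch.randn(8, 3 * 16 * 16)    # stand-in input for the unmasked view
p_anchor = assign(encoder(x_masked), temp=0.1)
with torch.no_grad():                    # target assignments are not backpropagated
    p_target = assign(encoder(x_full), temp=0.025)  # sharper target distribution
# Cross-entropy between the two assignment distributions.
loss = -(p_target * torch.log(p_anchor + 1e-8)).sum(dim=-1).mean()
```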
In this work, we introduce a self-supervised feature representation learning framework, DreamTeacher, that utilizes generative networks for pre-training downstream image backbones …
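The pre-training recipe described here amounts to regressing a backbone's features onto features produced by a frozen generative network. A speculative sketch assuming PyTorch; both networks and the MSE objective are stand-ins, and DreamTeacher's actual models and losses may differ.

```python
# Speculative sketch of feature distillation from a generative network
# into an image backbone; all components here are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator_feats = nn.Conv2d(3, 64, 3, padding=1)  # frozen generative feature source
backbone = nn.Conv2d(3, 64, 3, padding=1)          # image backbone being pre-trained
for p in generator_feats.parameters():
    p.requires_grad = False

x = torch.randn(4, 3, 32, 32)
with torch.no_grad():
    teacher = generator_feats(x)         # target features from the generative model
student = backbone(x)
loss = F.mse_loss(student, teacher)      # regress backbone features onto teacher's
loss.backward()
```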