Many recent self-supervised frameworks for visual representation learning are based on certain forms of Siamese networks. Such networks are conceptually symmetric with two …
This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a …
Bootstrap Your Own Latent (BYOL) is a self-supervised learning approach for image representation. From an augmented view of an image, BYOL trains an online network to …
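The BYOL recipe summarized in this snippet (an online network predicting the target network's representation of another augmented view, with the target tracking the online weights by exponential moving average) can be sketched with a purely illustrative numpy stand-in; the linear maps here are hypothetical placeholders for the real encoder/projector/predictor networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

# Hypothetical linear "networks" (illustration only, not BYOL's actual architecture).
W_online = rng.normal(size=(8, 4))  # online encoder
W_target = W_online.copy()          # target starts as a copy of the online encoder
W_pred = np.eye(4)                  # predictor on top of the online branch

def byol_loss(view1, view2):
    """Negative-cosine-style loss between the online prediction of view1 and the
    target projection of view2 (the target side receives no gradient in BYOL)."""
    p = l2_normalize(view1 @ W_online @ W_pred)  # online branch + predictor
    z = l2_normalize(view2 @ W_target)           # target branch (stop-gradient)
    return 2.0 - 2.0 * np.sum(p * z, axis=-1).mean()

def ema_update(tau=0.99):
    """Target weights track the online weights by exponential moving average."""
    global W_target
    W_target = tau * W_target + (1.0 - tau) * W_online

x = rng.normal(size=(16, 8))              # a batch of toy "images"
v1 = x + 0.1 * rng.normal(size=x.shape)   # two augmented views (noise as a stand-in)
v2 = x + 0.1 * rng.normal(size=x.shape)

loss = byol_loss(v1, v2)
ema_update()
print(float(loss))
```

Since the loss is `2 - 2·cos`, it is bounded in `[0, 4]`; the EMA update is what lets BYOL avoid collapse without explicit negative pairs.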
By leveraging contrastive learning, clustering, and other pretext tasks, unsupervised methods for learning image representations have reached impressive results on standard …
S Kim, G Lee, S Bae, SY Yun - arXiv preprint arXiv:2010.06300, 2020 - researchgate.net
Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation. By learning to contrast positive pairs' representations from the …
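The contrasting of positive pairs against other samples described in this snippet is commonly instantiated as the InfoNCE loss; a minimal numpy sketch (toy embeddings, all names hypothetical) is:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE: for each row i, (z1[i], z2[i]) is the positive pair and every
    other row of z2 serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # stabilize the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # cross-entropy; positives on the diagonal

rng = np.random.default_rng(0)
emb = rng.normal(size=(32, 16))
# Two noisy "views" of the same embeddings: positives align, negatives do not.
loss = info_nce(emb + 0.05 * rng.normal(size=emb.shape),
                emb + 0.05 * rng.normal(size=emb.shape))
print(float(loss))
```

With well-aligned positives the loss falls well below the chance level of `log(N)`, which is what "contrasting" buys over random assignment.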
P Chen, S Liu, J Jia - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Unsupervised representation learning with contrastive learning has achieved great success recently. However, these methods have to duplicate each training batch to construct …
Abstract We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of …
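The matching of a masked view's representation to an unmasked one, as this snippet describes, can be sketched in a simplified MSN-style form: both views are soft-assigned to a set of prototypes, and the sharper assignment of the unmasked view acts as the training target. This is a toy numpy illustration, not the full MSN method; the pooling, prototypes, and temperatures here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: 10 prototypes in a 16-d embedding space.
prototypes = rng.normal(size=(10, 16))

def msn_style_loss(anchor_emb, target_emb, temp_anchor=0.1, temp_target=0.025):
    """Match the prototype assignment of the masked (anchor) view to that of the
    unmasked (target) view; the sharper target distribution is the pseudo-label."""
    p_anchor = softmax(anchor_emb @ prototypes.T / temp_anchor)
    p_target = softmax(target_emb @ prototypes.T / temp_target)  # sharper target
    return -np.mean(np.sum(p_target * np.log(p_anchor + 1e-12), axis=1))

patches = rng.normal(size=(16, 64, 16))    # 16 images, 64 patch embeddings each
keep = rng.random(size=(16, 64, 1)) > 0.5  # random patch mask for the anchor view
anchor = (patches * keep).mean(axis=1)     # masked view: zero dropped patches, pool
target = patches.mean(axis=1)              # unmasked view: pool all patches
loss = msn_style_loss(anchor, target)
print(float(loss))
```

The cross-entropy is always positive here; in the real method the target branch is additionally a momentum copy of the anchor encoder.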
C Tao, X Zhu, W Su, G Huang, B Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-supervised learning (SSL) has delivered superior performance on a variety of downstream vision tasks. Two mainstream SSL frameworks have been proposed, i.e., …
Contrastive methods have led to a recent surge in the performance of self-supervised representation learning (SSL). Recent methods like BYOL or SimSiam purportedly distill …