How well do self-supervised models transfer?

L Ericsson, H Gouk… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Self-supervised visual representation learning has seen huge progress recently, but no
large scale evaluation has compared the many models now available. We evaluate the …

A broad study on the transferability of visual representations with contrastive learning

A Islam, CFR Chen, R Panda… - Proceedings of the …, 2021 - openaccess.thecvf.com
Tremendous progress has been made in visual representation learning, notably with the
recent success of self-supervised contrastive learning methods. Supervised contrastive …

SiT: Self-supervised vision transformer

S Atito, M Awais, J Kittler - arXiv preprint arXiv:2104.03602, 2021 - arxiv.org
Self-supervised learning methods are gaining increasing traction in computer vision due to
their recent success in reducing the gap with supervised learning. In natural language …

Self-supervised pretraining of visual features in the wild

P Goyal, M Caron, B Lefaudeux, M Xu, P Wang… - arXiv preprint arXiv …, 2021 - arxiv.org
Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have
reduced the gap with supervised methods. These results have been achieved in a control …

When does contrastive visual representation learning work?

E Cole, X Yang, K Wilber… - Proceedings of the …, 2022 - openaccess.thecvf.com
Recent self-supervised representation learning techniques have largely closed the gap
between supervised and unsupervised learning on ImageNet classification. While the …

Revisiting weakly supervised pre-training of visual perception models

M Singh, L Gustafson, A Adcock… - Proceedings of the …, 2022 - openaccess.thecvf.com
Model pre-training is a cornerstone of modern visual recognition systems. Although
fully supervised pre-training on datasets like ImageNet is still the de-facto standard, recent …

Leverage your local and global representations: A new self-supervised learning strategy

T Zhang, C Qiu, W Ke, S Süsstrunk… - Proceedings of the …, 2022 - openaccess.thecvf.com
Self-supervised learning (SSL) methods aim to learn view-invariant representations by
maximizing the similarity between the features extracted from different crops of the same …

Masked siamese networks for label-efficient learning

M Assran, M Caron, I Misra, P Bojanowski… - … on Computer Vision, 2022 - Springer
We propose Masked Siamese Networks (MSN), a self-supervised learning
framework for learning image representations. Our approach matches the representation of …

SLIP: Self-supervision meets language-image pre-training

N Mu, A Kirillov, D Wagner, S Xie - European conference on computer …, 2022 - Springer
Recent work has shown that self-supervised pre-training leads to improvements over
supervised learning on challenging visual recognition tasks. CLIP, an exciting new …

Distilling self-supervised vision transformers for weakly-supervised few-shot classification & segmentation

D Kang, P Koniusz, M Cho… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We address the task of weakly-supervised few-shot image classification and segmentation,
by leveraging a Vision Transformer (ViT) pretrained with self-supervision. Our proposed …