A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends

J Gui, T Chen, J Zhang, Q Cao, Z Sun… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …

SimMatch: Semi-supervised learning with similarity matching

M Zheng, S You, L Huang, F Wang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Learning with limited labeled data has been a longstanding problem in the computer vision and
machine learning research community. In this paper, we introduce a new semi-supervised …

Rethinking federated learning with domain shift: A prototype view

W Huang, M Ye, Z Shi, H Li, B Du - 2023 IEEE/CVF Conference …, 2023 - ieeexplore.ieee.org
Federated learning shows great promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …

Weakly supervised contrastive learning

M Zheng, F Wang, S You, C Qian… - Proceedings of the …, 2021 - openaccess.thecvf.com
Unsupervised visual representation learning has gained much attention from the computer
vision community because of the recent achievements of contrastive learning. Most of the …
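
As a reference point for the contrastive objective mentioned above, the sketch below shows the standard InfoNCE loss that contrastive representation learning methods typically build on; it is a minimal illustration only and does not reproduce the weakly supervised formulation of this particular paper.

# Minimal sketch of the standard InfoNCE contrastive objective; the
# weakly supervised variant of the paper above is not reproduced here.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)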

Green hierarchical vision transformer for masked image modeling

L Huang, S You, M Zheng, F Wang… - Advances in …, 2022 - proceedings.neurips.cc
We present an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision
Transformers (ViTs), allowing the hierarchical ViTs to discard masked patches and operate …
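
The snippet above refers to letting the encoder skip masked patches entirely. The sketch below illustrates the generic masked-patch-dropping step used in MAE-style masked image modeling pipelines; the hierarchical-ViT specifics of this paper (e.g., its attention design) are assumptions left out here.

# Illustrative sketch of dropping masked patches before the encoder in
# masked image modeling; not the paper's hierarchical-ViT method itself.
import torch

def drop_masked_patches(patch_tokens, mask_ratio=0.75):
    """patch_tokens: (B, N, D) patch embeddings; returns only the visible tokens."""
    B, N, D = patch_tokens.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                        # random score per patch
    keep_idx = noise.argsort(dim=1)[:, :num_keep]   # indices of visible patches
    visible = torch.gather(
        patch_tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D)
    )
    return visible, keep_idx                        # the encoder sees only `visible`

tokens = torch.randn(2, 196, 768)                   # e.g. 14x14 patches from a ViT
visible, keep_idx = drop_masked_patches(tokens)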

solo-learn: A library of self-supervised methods for visual representation learning

VGT Da Costa, E Fini, M Nabi, N Sebe… - Journal of Machine …, 2022 - jmlr.org
This paper presents solo-learn, a library of self-supervised methods for visual representation
learning. Implemented in Python, using PyTorch and PyTorch Lightning, the library fits both …

Downstream-agnostic adversarial examples

Z Zhou, S Hu, R Zhao, Q Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-supervised learning typically uses a large amount of unlabeled data to pre-train an
encoder that can be used as a general-purpose feature extractor, such that downstream …
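
The "general-purpose feature extractor" usage pattern referenced above is often realized as linear probing on a frozen encoder. The sketch below illustrates that pattern only; pretrained_encoder is a hypothetical stand-in, and none of the attack setup of the paper above is shown.

# Minimal sketch of the frozen pre-trained encoder + downstream head pattern
# (linear probing); `pretrained_encoder` is a placeholder for any
# self-supervised encoder, not the method of the paper above.
import torch
import torch.nn as nn

pretrained_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))  # stand-in
for p in pretrained_encoder.parameters():
    p.requires_grad = False                      # encoder stays frozen downstream

head = nn.Linear(512, 10)                        # task-specific linear classifier
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)

images, labels = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
with torch.no_grad():
    features = pretrained_encoder(images)        # general-purpose features
loss = nn.functional.cross_entropy(head(features), labels)
loss.backward()
optimizer.step()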

LightViT: Towards light-weight convolution-free vision transformers

T Huang, L Huang, S You, F Wang, C Qian… - arXiv preprint arXiv …, 2022 - arxiv.org
Vision transformers (ViTs) are usually considered to be less light-weight than convolutional
neural networks (CNNs) due to the lack of inductive bias. Recent works thus resort to …

ViTAS: Vision transformer architecture search

X Su, S You, J Xie, M Zheng, F Wang, C Qian… - … on Computer Vision, 2022 - Springer
Vision transformers (ViTs) have inherited the success of transformers in NLP, but their structures have not been
sufficiently investigated and optimized for visual tasks. One of the simplest solutions is to …

Effective sample pairs based contrastive learning for clustering

J Yin, H Wu, S Sun - Information Fusion, 2023 - Elsevier
As an indispensable branch of unsupervised learning, deep clustering is rapidly emerging
along with the growth of deep neural networks. Recently, the contrastive learning paradigm has …
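
To make the contrastive-clustering connection concrete, the sketch below shows a generic cluster-level contrast over soft assignment matrices from two augmented views; the effective-sample-pair selection strategy of the paper above is an assumption not reproduced here.

# Generic sketch of cluster-level contrast in contrastive deep clustering:
# columns of the soft cluster-assignment matrices from two views form
# positive pairs; not the specific pair-construction method of the paper above.
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(p1, p2, temperature=0.5):
    """p1, p2: (N, K) soft cluster assignments for two views of the same batch."""
    c1 = F.normalize(p1.t(), dim=1)              # (K, N): one vector per cluster
    c2 = F.normalize(p2.t(), dim=1)
    logits = c1 @ c2.t() / temperature           # (K, K) cluster similarity
    targets = torch.arange(c1.size(0))           # matching clusters are positives
    return F.cross_entropy(logits, targets)

p1 = torch.softmax(torch.randn(256, 10), dim=1)  # toy assignments over 10 clusters
p2 = torch.softmax(torch.randn(256, 10), dim=1)
loss = cluster_contrastive_loss(p1, p2)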