A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends

J Gui, T Chen, J Zhang, Q Cao, Z Sun… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …
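To make the self-supervised idea concrete, below is a minimal sketch of one classic pretext task (rotation prediction), in which the supervisory signal is derived from the unlabeled images themselves; the function name and shapes are illustrative, not taken from the survey.

```python
import torch

def rotation_pretext_batch(images: torch.Tensor):
    """Rotation-prediction pretext task: rotate each image by a random
    multiple of 90 degrees and use the rotation index as a free label,
    so no human annotation is required.

    images: (B, C, H, W) batch of unlabeled images.
    Returns (rotated_images, rotation_labels) for a 4-way classifier.
    """
    ks = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(images, ks)]
    )
    return rotated, ks
```

A backbone trained on such automatically generated (input, label) pairs learns transferable features with no manual labeling cost.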

Self-supervised remote sensing feature learning: Learning paradigms, challenges, and future works

C Tao, J Qi, M Guo, Q Zhu, H Li - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep learning has achieved great success in learning features from massive remote
sensing images (RSIs). To better understand the connection between three feature learning …

BigDatasetGAN: Synthesizing ImageNet with pixel-wise annotations

D Li, H Ling, SW Kim, K Kreis… - Proceedings of the …, 2022 - openaccess.thecvf.com
Annotating images with pixel-wise labels is a time-consuming and costly process. Recently,
DatasetGAN showcased a promising alternative: to synthesize a large labeled dataset via a …
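A toy, heavily simplified sketch of the DatasetGAN-style idea of synthesizing labeled data: a generator produces images while a small label branch produces pixel-wise masks for the same samples. All modules and sizes below are hypothetical stand-ins; the actual method decodes labels from the generator's internal features after being fit on a handful of annotated examples.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for illustration only: a tiny "generator" mapping
# latents to images, and a label head mapping the same latents to masks.
generator = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())
label_head = nn.Linear(64, 32 * 32)

def synthesize_labeled_batch(n: int = 16):
    """Sample (image, pixel-wise mask) pairs from the generative model."""
    z = torch.randn(n, 64)
    images = generator(z).view(n, 3, 32, 32)
    masks = (label_head(z).view(n, 32, 32).sigmoid() > 0.5).long()
    return images, masks  # a synthetic labeled dataset, grown on demand
```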

RLIP: Relational language-image pre-training for human-object interaction detection

H Yuan, J Jiang, S Albanie, T Feng… - Advances in …, 2022 - proceedings.neurips.cc
The task of Human-Object Interaction (HOI) detection targets fine-grained visual
parsing of humans interacting with their environment, enabling a broad range of …
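As an illustration of the language-image flavor of this line of work, the sketch below scores candidate human-object pairs against text embeddings of relation phrases via cosine similarity. This is a generic CLIP-style scoring scheme for intuition only, not RLIP's actual architecture.

```python
import torch
import torch.nn.functional as F

def score_hoi_pairs(pair_feats: torch.Tensor, text_feats: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Score candidate human-object pairs against relation descriptions
    ("person riding bicycle", ...) by cosine similarity.

    pair_feats: (R, D) visual features of human-object pairs.
    text_feats: (T, D) embeddings of relation phrases.
    Returns (R, T) logits over relation labels.
    """
    pair_feats = F.normalize(pair_feats, dim=1)
    text_feats = F.normalize(text_feats, dim=1)
    return pair_feats @ text_feats.T / temperature
```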

Dive into the details of self-supervised learning for medical image analysis

C Zhang, H Zheng, Y Gu - Medical Image Analysis, 2023 - Elsevier
Self-supervised learning (SSL) has achieved remarkable performance in various medical
imaging tasks by dint of priors from massive unlabeled data. However, regarding a specific …

Perfectly balanced: Improving transfer and robustness of supervised contrastive learning

M Chen, DY Fu, A Narayan, M Zhang… - International …, 2022 - proceedings.mlr.press
An ideal learned representation should display transferability and robustness. Supervised
contrastive learning (SupCon) is a promising method for training accurate models, but …
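For reference, here is a minimal implementation of the supervised contrastive (SupCon) loss over one embedding per sample; it follows the published formulation but is an illustrative sketch, not the authors' code.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss: pull together all same-class pairs,
    push apart different-class pairs.

    features: (N, D) embeddings; labels: (N,) integer class labels.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                        # (N, N) similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))    # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                             # anchors with a positive
    per_anchor = log_prob.masked_fill(~pos_mask, 0.0).sum(1)
    return -(per_anchor[valid] / pos_counts[valid]).mean()
```

For example, `supcon_loss(torch.randn(8, 128), torch.randint(0, 3, (8,)))` returns a scalar loss over a batch of eight embeddings with three classes.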

Revisiting the transferability of supervised pretraining: an MLP perspective

Y Wang, S Tang, F Zhu, L Bai, R Zhao… - Proceedings of the …, 2022 - openaccess.thecvf.com
The pretrain-finetune paradigm is a classical pipeline in visual learning. Recent progress on
unsupervised pretraining methods shows superior transfer performance to their supervised …
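The entry's MLP angle can be illustrated with a sketch in which supervised pretraining borrows the MLP projector common in unsupervised methods, keeping only the backbone for transfer; the layer sizes and class count below are placeholders, not the paper's exact architecture.

```python
import torch.nn as nn

class SupervisedPretrainModel(nn.Module):
    """Supervised pretraining with an MLP projector inserted before the
    classifier, mirroring unsupervised methods. After pretraining, the
    projector and classifier are discarded; the backbone transfers.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int = 2048,
                 proj_dim: int = 256, num_classes: int = 1000):
        super().__init__()
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )
        self.classifier = nn.Linear(proj_dim, num_classes)

    def forward(self, x):
        h = self.backbone(x)   # transferable representation
        z = self.projector(h)  # MLP head used only during pretraining
        return self.classifier(z)
```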

Large-scale unsupervised semantic segmentation

S Gao, ZY Li, MH Yang, MM Cheng… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Empowered by large datasets, e.g., ImageNet and MS COCO, unsupervised learning on
large-scale data has enabled significant advances for classification tasks. However, whether …

Pro-tuning: Unified prompt tuning for vision tasks

X Nie, B Ni, J Chang, G Meng, C Huo… - … on Circuits and …, 2023 - ieeexplore.ieee.org
In computer vision, fine-tuning is the de facto approach for leveraging pre-trained vision
models to perform downstream tasks. However, deploying it in practice is quite challenging …
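A hedged sketch of the general prompt-tuning recipe this entry builds on: the pre-trained backbone is frozen and only a few learnable prompt tokens plus a task head are trained. This mirrors visual prompt tuning for transformers rather than Pro-tuning's specific design, and the `blocks` interface is an assumption.

```python
import torch
import torch.nn as nn

class PromptedBackbone(nn.Module):
    """Prompt tuning sketch: prepend learnable prompt tokens to the patch
    tokens of a frozen transformer backbone; only the prompts and the task
    head receive gradients. `blocks` is assumed to map (B, N, D) -> (B, N, D).
    """
    def __init__(self, blocks: nn.Module, embed_dim: int = 768,
                 num_prompts: int = 10, num_classes: int = 10):
        super().__init__()
        self.blocks = blocks
        for p in self.blocks.parameters():
            p.requires_grad = False                  # backbone stays frozen
        self.prompts = nn.Parameter(torch.empty(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        b = patch_tokens.size(0)
        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], 1)
        feats = self.blocks(tokens)                  # (B, P + N, D)
        return self.head(feats.mean(dim=1))          # pool and classify
```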

Augmentations in graph contrastive learning: Current methodological flaws & towards better practices

P Trivedi, ES Lubana, Y Yan, Y Yang… - Proceedings of the ACM …, 2022 - dl.acm.org
Graph classification has a wide range of applications in bioinformatics, social sciences,
automated fake news detection, web document classification, and more. In many practical …
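Since the entry concerns augmentation choices in graph contrastive learning, here is a minimal example of one widely used augmentation, random edge dropping, producing two stochastic views of the same graph; the COO edge-index convention is borrowed from PyG-style code, and the drop probability is a placeholder.

```python
import torch

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Randomly drop a fraction p of edges.

    edge_index: (2, E) COO connectivity, one column per edge.
    """
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

def two_views(edge_index: torch.Tensor, p: float = 0.2):
    """Two independently perturbed views of the same graph, the usual
    input to a graph contrastive objective."""
    return drop_edges(edge_index, p), drop_edges(edge_index, p)
```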